Red Hat Bugzilla – Attachment 1453734 Details for Bug 1594176
Error response from daemon: No such container: ceph-mon-controller-0
RC9 /var/lib/mistral/xyz/ansible.log
ansible.log (text/plain), 6.50 MB, created by Filip Hubík on 2018-06-22 13:49:49 UTC
Description: RC9 /var/lib/mistral/xyz/ansible.log
Filename: ansible.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-06-22 13:49:49 UTC
Size: 6.50 MB
>2018-06-22 09:03:50,530 p=21516 u=mistral | Using /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ansible.cfg as config file >2018-06-22 09:03:51,160 p=21516 u=mistral | PLAY [Gather facts from undercloud] ******************************************** >2018-06-22 09:03:51,170 p=21516 u=mistral | TASK [Gathering Facts] ********************************************************* >2018-06-22 09:03:51,877 p=21516 u=mistral | ok: [undercloud] >2018-06-22 09:03:51,891 p=21516 u=mistral | PLAY [Gather facts from overcloud] ********************************************* >2018-06-22 09:03:51,903 p=21516 u=mistral | TASK [Gathering Facts] ********************************************************* >2018-06-22 09:03:54,888 p=21516 u=mistral | ok: [compute-0] >2018-06-22 09:03:55,349 p=21516 u=mistral | ok: [controller-0] >2018-06-22 09:03:55,420 p=21516 u=mistral | ok: [ceph-0] >2018-06-22 09:03:55,433 p=21516 u=mistral | PLAY [Load global variables] *************************************************** >2018-06-22 09:03:55,456 p=21516 u=mistral | TASK [include_vars] ************************************************************ >2018-06-22 09:03:55,526 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.16,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.10,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.10,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.10,ceph-0.external.localdomain,ceph-0.external,192.168.24.10,ceph-0.management.localdomain,ceph-0.management,192.168.24.10,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": 
"172.17.1.21,compute-0.localdomain,compute-0,172.17.3.10,compute-0.storage.localdomain,compute-0.storage,192.168.24.15,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.21,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.10,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.15,compute-0.external.localdomain,compute-0.external,192.168.24.15,compute-0.management.localdomain,compute-0.management,192.168.24.15,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.18,controller-0.storage.localdomain,controller-0.storage,172.17.4.17,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.104,controller-0.external.localdomain,controller-0.external,192.168.24.8,controller-0.management.localdomain,controller-0.management,192.168.24.8,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/global_vars.yaml"], "changed": false} >2018-06-22 09:03:55,527 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.16,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.10,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.10,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.10,ceph-0.external.localdomain,ceph-0.external,192.168.24.10,ceph-0.management.localdomain,ceph-0.management,192.168.24.10,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": 
"172.17.1.21,compute-0.localdomain,compute-0,172.17.3.10,compute-0.storage.localdomain,compute-0.storage,192.168.24.15,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.21,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.10,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.15,compute-0.external.localdomain,compute-0.external,192.168.24.15,compute-0.management.localdomain,compute-0.management,192.168.24.15,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.18,controller-0.storage.localdomain,controller-0.storage,172.17.4.17,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.104,controller-0.external.localdomain,controller-0.external,192.168.24.8,controller-0.management.localdomain,controller-0.management,192.168.24.8,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/global_vars.yaml"], "changed": false} >2018-06-22 09:03:55,528 p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.16,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.10,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.10,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.10,ceph-0.external.localdomain,ceph-0.external,192.168.24.10,ceph-0.management.localdomain,ceph-0.management,192.168.24.10,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": 
"172.17.1.21,compute-0.localdomain,compute-0,172.17.3.10,compute-0.storage.localdomain,compute-0.storage,192.168.24.15,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.21,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.10,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.15,compute-0.external.localdomain,compute-0.external,192.168.24.15,compute-0.management.localdomain,compute-0.management,192.168.24.15,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.18,controller-0.storage.localdomain,controller-0.storage,172.17.4.17,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.104,controller-0.external.localdomain,controller-0.external,192.168.24.8,controller-0.management.localdomain,controller-0.management,192.168.24.8,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/global_vars.yaml"], "changed": false} >2018-06-22 09:03:55,552 p=21516 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": "172.17.3.14,ceph-0.localdomain,ceph-0,172.17.3.14,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.16,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.10,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.10,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.10,ceph-0.external.localdomain,ceph-0.external,192.168.24.10,ceph-0.management.localdomain,ceph-0.management,192.168.24.10,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": 
"172.17.1.21,compute-0.localdomain,compute-0,172.17.3.10,compute-0.storage.localdomain,compute-0.storage,192.168.24.15,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.21,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.10,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.15,compute-0.external.localdomain,compute-0.external,192.168.24.15,compute-0.management.localdomain,compute-0.management,192.168.24.15,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.18,controller-0.storage.localdomain,controller-0.storage,172.17.4.17,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.15,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.104,controller-0.external.localdomain,controller-0.external,192.168.24.8,controller-0.management.localdomain,controller-0.management,192.168.24.8,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/global_vars.yaml"], "changed": false} >2018-06-22 09:03:55,560 p=21516 u=mistral | PLAY [Common roles for TripleO servers] **************************************** >2018-06-22 09:03:55,582 p=21516 u=mistral | TASK [tripleo-bootstrap : Deploy required packages to bootstrap TripleO] ******* >2018-06-22 09:03:56,392 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]} >2018-06-22 09:03:56,423 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 
providing jq is already installed"]} >2018-06-22 09:03:56,429 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180605100743.235e1ae.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]} >2018-06-22 09:03:56,452 p=21516 u=mistral | TASK [tripleo-bootstrap : Create /var/lib/heat-config/tripleo-config-download directory for deployment data] *** >2018-06-22 09:03:56,899 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:03:56,914 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:03:56,918 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:03:56,943 p=21516 u=mistral | TASK [tripleo-ssh-known-hosts : Template /etc/ssh/ssh_known_hosts] ************* >2018-06-22 09:03:57,883 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "4c58c062bd5785d60c4ef72dde02cef16d818aa7", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "26268d9915d132f1f86d896bec72aff7", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1906, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672637.03-9892733577746/source", "state": "file", "uid": 0} >2018-06-22 09:03:57,889 p=21516 
u=mistral | changed: [controller-0] => {"changed": true, "checksum": "4c58c062bd5785d60c4ef72dde02cef16d818aa7", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "26268d9915d132f1f86d896bec72aff7", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1906, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672636.98-248750625462241/source", "state": "file", "uid": 0} >2018-06-22 09:03:57,902 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "4c58c062bd5785d60c4ef72dde02cef16d818aa7", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "26268d9915d132f1f86d896bec72aff7", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1906, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672637.0-133565432331994/source", "state": "file", "uid": 0} >2018-06-22 09:03:57,909 p=21516 u=mistral | PLAY [Overcloud deploy step tasks for step 0] ********************************** >2018-06-22 09:03:57,935 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:03:57,964 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:57,989 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,001 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,025 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:03:58,052 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,078 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,089 p=21516 u=mistral | 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,110 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:03:58,160 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,160 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,171 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,193 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:03:58,219 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,243 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,255 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,275 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:03:58,300 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,319 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,332 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:58,337 p=21516 u=mistral | PLAY [Server deployments] ****************************************************** >2018-06-22 09:03:58,358 p=21516 u=mistral | TASK [include] ***************************************************************** >2018-06-22 09:03:58,574 p=21516 u=mistral | included: 
/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Controller/deployments.yaml for controller-0 >2018-06-22 09:03:58,583 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Controller/deployments.yaml for controller-0 >2018-06-22 09:03:58,591 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Controller/deployments.yaml for controller-0 >2018-06-22 09:03:58,599 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Controller/deployments.yaml for controller-0 >2018-06-22 09:03:58,608 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Controller/deployments.yaml for controller-0 >2018-06-22 09:03:58,616 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Controller/deployments.yaml for controller-0 >2018-06-22 09:03:58,625 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Controller/deployments.yaml for controller-0 >2018-06-22 09:03:58,633 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Controller/deployments.yaml for controller-0 >2018-06-22 09:03:58,657 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:03:58,715 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "eb5b74de-ea3d-4884-9a34-a70e159ec7a5"}, "changed": false} >2018-06-22 09:03:58,738 p=21516 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-06-22 09:03:59,339 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "2dc06746e8fe0ff8e6d253693eb73eb03a07050a", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-eb5b74de-ea3d-4884-9a34-a70e159ec7a5", "gid": 0, "group": "root", "md5sum": "80880ce6885e90f4f5ccf7c8a9d0fec1", "mode": "0644", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 10195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672638.79-124269984855395/source", "state": "file", "uid": 0} >2018-06-22 09:03:59,363 p=21516 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-06-22 09:03:59,678 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:03:59,701 p=21516 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-06-22 09:03:59,719 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:59,742 p=21516 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-06-22 09:03:59,758 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:59,780 p=21516 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-06-22 09:03:59,795 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:03:59,816 p=21516 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-06-22 09:04:28,956 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.notify.json)", "delta": "0:00:28.658197", "end": "2018-06-22 09:04:28.948490", "rc": 0, "start": "2018-06-22 09:04:00.290293", "stderr": "[2018-06-22 09:04:00,317] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json\n[2018-06-22 09:04:28,535] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 
192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": 
\\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 09:04:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 09:04:00 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 09:04:00 AM] [INFO] Not using any mapping file.\\n[2018/06/22 09:04:01 AM] [INFO] Finding active nics\\n[2018/06/22 09:04:01 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 09:04:01 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 09:04:01 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 09:04:01 AM] [INFO] lo is not an active nic\\n[2018/06/22 09:04:01 AM] [INFO] No DPDK mapping available in path 
(/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 09:04:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 09:04:01 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 09:04:01 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 09:04:01 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth0\\n[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth1\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-ex\\n[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth2\\n[2018/06/22 09:04:01 AM] [INFO] applying network configs...\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 09:04:01 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 09:04:10 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 09:04:14 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:04:19 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:04:23 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 09:04:28 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ sed 
-e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-22 09:04:28,535] (heat-config) [DEBUG] [2018-06-22 09:04:00,340] (heat-config) [INFO] interface_name=nic1\n[2018-06-22 09:04:00,341] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-NetworkDeployment-mmf2k6d2yqmq-TripleOSoftwareDeployment-ktdyhjwhoklp/181a2572-6d7f-4029-a0c5-268d01163402\n[2018-06-22 09:04:00,341] (heat-config) [INFO] 
deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:04:00,341] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5\n[2018-06-22 09:04:28,530] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-22 09:04:28,530] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], 
\"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/22 09:04:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/22 09:04:00 AM] [INFO] Ifcfg net config provider created.\n[2018/06/22 09:04:00 AM] [INFO] Not using any mapping file.\n[2018/06/22 09:04:01 AM] [INFO] Finding active nics\n[2018/06/22 09:04:01 AM] [INFO] eth0 is an embedded active nic\n[2018/06/22 09:04:01 AM] [INFO] eth1 is an embedded active nic\n[2018/06/22 09:04:01 AM] [INFO] eth2 is an embedded active nic\n[2018/06/22 09:04:01 AM] [INFO] lo is not an active nic\n[2018/06/22 09:04:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/22 09:04:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/22 09:04:01 AM] [INFO] nic3 mapped to: eth2\n[2018/06/22 09:04:01 AM] [INFO] nic2 mapped to: eth1\n[2018/06/22 09:04:01 AM] [INFO] nic1 mapped to: 
eth0\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth0\n[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: eth0\n[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-isolated\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth1\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan20\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan30\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan40\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan50\n[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-ex\n[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: br-ex\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth2\n[2018/06/22 09:04:01 AM] [INFO] applying network configs...\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth2\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth1\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth0\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-ex\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/22 09:04:01 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/06/22 09:04:01 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-ex\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth2\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth1\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth0\n[2018/06/22 09:04:10 AM] [INFO] running ifup on interface: vlan50\n[2018/06/22 09:04:14 AM] [INFO] running ifup on interface: vlan20\n[2018/06/22 09:04:19 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 09:04:23 AM] [INFO] running ifup on interface: vlan40\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan20\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan40\n[2018/06/22 09:04:28 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' 
--type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-22 09:04:28,531] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5\n\n[2018-06-22 09:04:28,535] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:04:28,536] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.notify.json\n[2018-06-22 09:04:28,940] (heat-config) [INFO] \n[2018-06-22 09:04:28,940] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:04:00,317] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json", "[2018-06-22 09:04:28,535] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", 
\\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, 
{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 09:04:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 09:04:00 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 09:04:00 AM] [INFO] Not using any mapping file.\\n[2018/06/22 09:04:01 AM] [INFO] Finding active nics\\n[2018/06/22 09:04:01 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 09:04:01 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 09:04:01 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 09:04:01 AM] [INFO] lo is not an active nic\\n[2018/06/22 09:04:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 09:04:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 09:04:01 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 09:04:01 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 09:04:01 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 09:04:01 AM] [INFO] 
adding interface: eth0\\n[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth1\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-ex\\n[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth2\\n[2018/06/22 09:04:01 AM] [INFO] applying network configs...\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 09:04:01 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 09:04:01 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 09:04:10 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 09:04:14 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:04:19 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:04:23 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 09:04:28 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key 
os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-22 09:04:28,535] (heat-config) [DEBUG] [2018-06-22 09:04:00,340] (heat-config) [INFO] interface_name=nic1", "[2018-06-22 09:04:00,341] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-NetworkDeployment-mmf2k6d2yqmq-TripleOSoftwareDeployment-ktdyhjwhoklp/181a2572-6d7f-4029-a0c5-268d01163402", "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:04:00,341] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5", "[2018-06-22 09:04:28,530] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-22 09:04:28,530] 
(heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": 
\"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/22 09:04:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/22 09:04:00 AM] [INFO] Ifcfg net config provider created.", "[2018/06/22 09:04:00 AM] [INFO] Not using any mapping file.", "[2018/06/22 09:04:01 AM] [INFO] Finding active nics", "[2018/06/22 09:04:01 AM] [INFO] eth0 is an embedded active nic", "[2018/06/22 09:04:01 AM] [INFO] eth1 is an embedded active nic", "[2018/06/22 09:04:01 AM] [INFO] eth2 is an embedded active nic", "[2018/06/22 09:04:01 AM] [INFO] lo is not an active nic", "[2018/06/22 09:04:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/22 09:04:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/22 09:04:01 AM] [INFO] nic3 mapped to: eth2", "[2018/06/22 09:04:01 AM] [INFO] nic2 mapped to: eth1", "[2018/06/22 09:04:01 AM] [INFO] nic1 mapped to: eth0", "[2018/06/22 09:04:01 AM] [INFO] adding interface: eth0", "[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: eth0", "[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-isolated", "[2018/06/22 09:04:01 AM] [INFO] adding interface: eth1", "[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan20", "[2018/06/22 09:04:01 AM] [INFO] 
adding vlan: vlan30", "[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan40", "[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan50", "[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-ex", "[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: br-ex", "[2018/06/22 09:04:01 AM] [INFO] adding interface: eth2", "[2018/06/22 09:04:01 AM] [INFO] applying network configs...", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth2", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth1", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth0", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-ex", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/06/22 09:04:01 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-ex", "[2018/06/22 09:04:06 
AM] [INFO] running ifup on interface: eth2", "[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth1", "[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth0", "[2018/06/22 09:04:10 AM] [INFO] running ifup on interface: vlan50", "[2018/06/22 09:04:14 AM] [INFO] running ifup on interface: vlan20", "[2018/06/22 09:04:19 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 09:04:23 AM] [INFO] running ifup on interface: vlan40", "[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan20", "[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan40", "[2018/06/22 09:04:28 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", 
"+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-22 09:04:28,531] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5", "", "[2018-06-22 09:04:28,535] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:04:28,536] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.notify.json", "[2018-06-22 09:04:28,940] (heat-config) [INFO] ", "[2018-06-22 09:04:28,940] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:04:28,984 p=21516 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-22 09:04:29,037 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:04:00,317] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json", > "[2018-06-22 09:04:28,535] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], 
\\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.8/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.18/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.17/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.15/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.104/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 09:04:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 09:04:00 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 09:04:00 AM] [INFO] Not using any mapping file.\\n[2018/06/22 09:04:01 AM] [INFO] Finding active nics\\n[2018/06/22 09:04:01 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 09:04:01 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 09:04:01 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 09:04:01 AM] [INFO] lo is not an active nic\\n[2018/06/22 09:04:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 09:04:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 09:04:01 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 09:04:01 AM] [INFO] nic2 mapped 
to: eth1\\n[2018/06/22 09:04:01 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth0\\n[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth1\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-ex\\n[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: br-ex\\n[2018/06/22 09:04:01 AM] [INFO] adding interface: eth2\\n[2018/06/22 09:04:01 AM] [INFO] applying network configs...\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 09:04:01 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 09:04:01 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-ex\\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 09:04:10 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 09:04:14 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:04:19 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:04:23 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 09:04:28 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-22 09:04:28,535] (heat-config) [DEBUG] [2018-06-22 09:04:00,340] (heat-config) [INFO] interface_name=nic1", > "[2018-06-22 09:04:00,341] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-NetworkDeployment-mmf2k6d2yqmq-TripleOSoftwareDeployment-ktdyhjwhoklp/181a2572-6d7f-4029-a0c5-268d01163402", > "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:04:00,341] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:04:00,341] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5", > "[2018-06-22 
09:04:28,530] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-22 09:04:28,530] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.8/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": 
\"172.17.3.18/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.17/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.15/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.104/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/22 09:04:00 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/22 09:04:00 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/22 09:04:00 AM] [INFO] Not using any mapping file.", > "[2018/06/22 09:04:01 AM] [INFO] Finding active nics", > "[2018/06/22 09:04:01 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/22 09:04:01 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/22 09:04:01 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/22 09:04:01 AM] [INFO] lo is not an active nic", > "[2018/06/22 09:04:01 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/22 09:04:01 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/22 09:04:01 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/22 09:04:01 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/22 09:04:01 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/22 09:04:01 AM] [INFO] adding interface: eth0", > "[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/22 09:04:01 AM] [INFO] 
adding bridge: br-isolated", > "[2018/06/22 09:04:01 AM] [INFO] adding interface: eth1", > "[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan20", > "[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan30", > "[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan40", > "[2018/06/22 09:04:01 AM] [INFO] adding vlan: vlan50", > "[2018/06/22 09:04:01 AM] [INFO] adding bridge: br-ex", > "[2018/06/22 09:04:01 AM] [INFO] adding custom route for interface: br-ex", > "[2018/06/22 09:04:01 AM] [INFO] adding interface: eth2", > "[2018/06/22 09:04:01 AM] [INFO] applying network configs...", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth2", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/22 09:04:01 AM] [INFO] running ifdown on bridge: br-ex", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", 
> "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/06/22 09:04:01 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/22 09:04:01 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/22 09:04:01 AM] [INFO] running ifup on bridge: br-ex", > "[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth2", > "[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth1", > "[2018/06/22 09:04:06 AM] [INFO] running ifup on interface: eth0", > "[2018/06/22 09:04:10 AM] [INFO] running ifup on interface: vlan50", > "[2018/06/22 09:04:14 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/22 09:04:19 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 09:04:23 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 09:04:27 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/22 09:04:28 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-22 09:04:28,531] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/eb5b74de-ea3d-4884-9a34-a70e159ec7a5", > "", > "[2018-06-22 09:04:28,535] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:04:28,536] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.json < /var/lib/heat-config/deployed/eb5b74de-ea3d-4884-9a34-a70e159ec7a5.notify.json", > "[2018-06-22 09:04:28,940] (heat-config) [INFO] ", > "[2018-06-22 09:04:28,940] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:04:29,061 p=21516 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-06-22 09:04:29,078 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-22 09:04:29,099 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:04:29,147 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "133f50ba-6071-42f7-9ef0-8985c2e1c247"}, "changed": false} >2018-06-22 09:04:29,168 p=21516 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment] ************** >2018-06-22 09:04:29,785 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "7c3fd82d078a69fa0d51f62eeacf9eebeb4297b5", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerUpgradeInitDeployment-133f50ba-6071-42f7-9ef0-8985c2e1c247", "gid": 0, "group": "root", "md5sum": "5ac28a00744b34d5d1dd2b66edb2d4a5", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1183, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672669.22-211294889453513/source", "state": "file", "uid": 0} >2018-06-22 09:04:29,810 p=21516 u=mistral | TASK [Check if deployed file exists for ControllerUpgradeInitDeployment] ******* >2018-06-22 09:04:30,134 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:04:30,160 p=21516 u=mistral | TASK [Check previous deployment rc for ControllerUpgradeInitDeployment] ******** >2018-06-22 09:04:30,176 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:30,201 p=21516 u=mistral | TASK [Remove deployed file for ControllerUpgradeInitDeployment when previous deployment failed] *** >2018-06-22 09:04:30,217 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:30,242 p=21516 u=mistral | TASK [Force remove deployed file for ControllerUpgradeInitDeployment] ********** >2018-06-22 09:04:30,257 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:04:30,280 p=21516 u=mistral | TASK [Run deployment ControllerUpgradeInitDeployment] ************************** >2018-06-22 09:04:31,063 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.notify.json)", "delta": "0:00:00.449901", "end": "2018-06-22 09:04:31.073774", "rc": 0, "start": "2018-06-22 09:04:30.623873", "stderr": "[2018-06-22 09:04:30,647] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json\n[2018-06-22 09:04:30,676] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:04:30,676] (heat-config) [DEBUG] [2018-06-22 09:04:30,668] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-ControllerUpgradeInitDeployment-42lxkwjegpya/70a0c93b-86c4-41bc-b021-345deed4f629\n[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:04:30,669] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247\n[2018-06-22 09:04:30,673] (heat-config) [INFO] \n[2018-06-22 09:04:30,673] (heat-config) [DEBUG] \n[2018-06-22 09:04:30,673] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247\n\n[2018-06-22 09:04:30,676] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:04:30,676] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.notify.json\n[2018-06-22 09:04:31,067] (heat-config) [INFO] \n[2018-06-22 09:04:31,068] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:04:30,647] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json", "[2018-06-22 09:04:30,676] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:04:30,676] (heat-config) [DEBUG] [2018-06-22 09:04:30,668] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-ControllerUpgradeInitDeployment-42lxkwjegpya/70a0c93b-86c4-41bc-b021-345deed4f629", "[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:04:30,669] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247", "[2018-06-22 09:04:30,673] (heat-config) [INFO] ", "[2018-06-22 09:04:30,673] (heat-config) [DEBUG] ", "[2018-06-22 09:04:30,673] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247", "", "[2018-06-22 09:04:30,676] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:04:30,676] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.notify.json", "[2018-06-22 09:04:31,067] (heat-config) [INFO] ", "[2018-06-22 09:04:31,068] (heat-config) [DEBUG] "], "stdout": "", 
"stdout_lines": []} >2018-06-22 09:04:31,086 p=21516 u=mistral | TASK [Output for ControllerUpgradeInitDeployment] ****************************** >2018-06-22 09:04:31,132 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:04:30,647] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json", > "[2018-06-22 09:04:30,676] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:04:30,676] (heat-config) [DEBUG] [2018-06-22 09:04:30,668] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-jqhkwynwtsyb-0-ybim2xtdm545-ControllerUpgradeInitDeployment-42lxkwjegpya/70a0c93b-86c4-41bc-b021-345deed4f629", > "[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:04:30,669] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:04:30,669] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247", > "[2018-06-22 09:04:30,673] (heat-config) [INFO] ", > "[2018-06-22 09:04:30,673] (heat-config) [DEBUG] ", > "[2018-06-22 09:04:30,673] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/133f50ba-6071-42f7-9ef0-8985c2e1c247", > "", > "[2018-06-22 09:04:30,676] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:04:30,676] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.json < /var/lib/heat-config/deployed/133f50ba-6071-42f7-9ef0-8985c2e1c247.notify.json", > "[2018-06-22 09:04:31,067] (heat-config) [INFO] ", > "[2018-06-22 09:04:31,068] 
(heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:04:31,155 p=21516 u=mistral | TASK [Check-mode for Run deployment ControllerUpgradeInitDeployment] *********** >2018-06-22 09:04:31,168 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:31,190 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:04:31,553 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "a2e0b917-2a7d-4a88-8de4-76ab218afc6f"}, "changed": false} >2018-06-22 09:04:31,575 p=21516 u=mistral | TASK [Render deployment file for ControllerDeployment] ************************* >2018-06-22 09:04:32,549 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "8ed3d7352dee9c8aedfe073617d894afc0080dfb", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerDeployment-a2e0b917-2a7d-4a88-8de4-76ab218afc6f", "gid": 0, "group": "root", "md5sum": "66bcf6ab487190ea2dd70b0570e648ad", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 73456, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672671.98-185962679511394/source", "state": "file", "uid": 0} >2018-06-22 09:04:32,573 p=21516 u=mistral | TASK [Check if deployed file exists for ControllerDeployment] ****************** >2018-06-22 09:04:32,888 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:04:32,914 p=21516 u=mistral | TASK [Check previous deployment rc for ControllerDeployment] ******************* >2018-06-22 09:04:32,931 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:32,953 p=21516 u=mistral | TASK [Remove deployed file for ControllerDeployment when previous deployment failed] *** >2018-06-22 09:04:32,970 p=21516 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:32,992 p=21516 u=mistral | TASK [Force remove deployed file for ControllerDeployment] ********************* >2018-06-22 09:04:33,007 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:33,029 p=21516 u=mistral | TASK [Run deployment ControllerDeployment] ************************************* >2018-06-22 09:04:33,935 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.notify.json)", "delta": "0:00:00.535471", "end": "2018-06-22 09:04:33.944933", "rc": 0, "start": "2018-06-22 09:04:33.409462", "stderr": "[2018-06-22 09:04:33,440] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.json\n[2018-06-22 09:04:33,560] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:04:33,560] (heat-config) [DEBUG] \n[2018-06-22 09:04:33,560] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 09:04:33,560] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.json < /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.notify.json\n[2018-06-22 09:04:33,938] (heat-config) [INFO] \n[2018-06-22 09:04:33,938] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:04:33,440] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.json", "[2018-06-22 09:04:33,560] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:04:33,560] (heat-config) [DEBUG] 
", "[2018-06-22 09:04:33,560] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 09:04:33,560] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.json < /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.notify.json", "[2018-06-22 09:04:33,938] (heat-config) [INFO] ", "[2018-06-22 09:04:33,938] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:04:33,959 p=21516 u=mistral | TASK [Output for ControllerDeployment] ***************************************** >2018-06-22 09:04:34,003 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:04:33,440] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.json", > "[2018-06-22 09:04:33,560] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:04:33,560] (heat-config) [DEBUG] ", > "[2018-06-22 09:04:33,560] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 09:04:33,560] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.json < /var/lib/heat-config/deployed/a2e0b917-2a7d-4a88-8de4-76ab218afc6f.notify.json", > "[2018-06-22 09:04:33,938] (heat-config) [INFO] ", > "[2018-06-22 09:04:33,938] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:04:34,025 p=21516 u=mistral | TASK [Check-mode for Run deployment ControllerDeployment] ********************** >2018-06-22 09:04:34,037 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:34,059 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:04:34,108 p=21516 u=mistral | 
ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "bf6fa48b-3a96-4cd5-a95c-e5254649671f"}, "changed": false} >2018-06-22 09:04:34,131 p=21516 u=mistral | TASK [Render deployment file for ControllerHostsDeployment] ******************** >2018-06-22 09:04:34,741 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "fb4d4f009e5f5f5ff1b1c65d3446b7a69e6a61a6", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostsDeployment-bf6fa48b-3a96-4cd5-a95c-e5254649671f", "gid": 0, "group": "root", "md5sum": "73ffeffc16a11044bd23dd7fe5242237", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4085, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672674.18-183795544889303/source", "state": "file", "uid": 0} >2018-06-22 09:04:34,766 p=21516 u=mistral | TASK [Check if deployed file exists for ControllerHostsDeployment] ************* >2018-06-22 09:04:35,139 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:04:35,162 p=21516 u=mistral | TASK [Check previous deployment rc for ControllerHostsDeployment] ************** >2018-06-22 09:04:35,180 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:35,201 p=21516 u=mistral | TASK [Remove deployed file for ControllerHostsDeployment when previous deployment failed] *** >2018-06-22 09:04:35,217 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:35,238 p=21516 u=mistral | TASK [Force remove deployed file for ControllerHostsDeployment] **************** >2018-06-22 09:04:35,253 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:35,274 p=21516 u=mistral | TASK [Run deployment ControllerHostsDeployment] ******************************** >2018-06-22 09:04:36,171 
p=21516 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.notify.json)", "delta": "0:00:00.492536", "end": "2018-06-22 09:04:36.155433", "rc": 0, "start": "2018-06-22 09:04:35.662897", "stderr": "[2018-06-22 09:04:35,685] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json\n[2018-06-22 09:04:35,723] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain 
ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-22 09:04:35,723] (heat-config) [DEBUG] [2018-06-22 09:04:35,706] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-22 09:04:35,706] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-ktx2tirk4lao-0-luttmm6aujy7/ceb8fc96-fcec-460e-841f-0869d4795085\n[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:04:35,707] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f\n[2018-06-22 09:04:35,720] (heat-config) [INFO] \n[2018-06-22 09:04:35,720] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 
overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-22 09:04:35,720] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f\n\n[2018-06-22 09:04:35,723] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:04:35,724] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.notify.json\n[2018-06-22 09:04:36,149] (heat-config) [INFO] \n[2018-06-22 09:04:36,149] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:04:35,685] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json", "[2018-06-22 09:04:35,723] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 
compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-22 09:04:35,723] (heat-config) [DEBUG] [2018-06-22 09:04:35,706] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-22 09:04:35,706] (heat-config) [INFO] 
deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-ktx2tirk4lao-0-luttmm6aujy7/ceb8fc96-fcec-460e-841f-0869d4795085", "[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:04:35,707] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f", "[2018-06-22 09:04:35,720] (heat-config) [INFO] ", "[2018-06-22 09:04:35,720] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 
ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ 
'[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 
overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-22 09:04:35,720] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f", "", "[2018-06-22 09:04:35,723] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:04:35,724] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.notify.json", "[2018-06-22 09:04:36,149] (heat-config) [INFO] ", "[2018-06-22 09:04:36,149] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:04:36,201 p=21516 u=mistral | TASK [Output for ControllerHostsDeployment] ************************************ >2018-06-22 09:04:36,318 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:04:35,685] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json", > "[2018-06-22 09:04:35,723] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain 
controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-22 09:04:35,723] (heat-config) [DEBUG] [2018-06-22 09:04:35,706] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-22 09:04:35,706] 
(heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-ktx2tirk4lao-0-luttmm6aujy7/ceb8fc96-fcec-460e-841f-0869d4795085", > "[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:04:35,706] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:04:35,707] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f", > "[2018-06-22 09:04:35,720] (heat-config) [INFO] ", > "[2018-06-22 09:04:35,720] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > 
"192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 
compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > 
"172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-22 09:04:35,720] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/bf6fa48b-3a96-4cd5-a95c-e5254649671f", > "", > "[2018-06-22 09:04:35,723] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:04:35,724] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.json < /var/lib/heat-config/deployed/bf6fa48b-3a96-4cd5-a95c-e5254649671f.notify.json", > "[2018-06-22 09:04:36,149] (heat-config) [INFO] ", > "[2018-06-22 09:04:36,149] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:04:36,346 p=21516 u=mistral | TASK [Check-mode for Run deployment ControllerHostsDeployment] ***************** >2018-06-22 09:04:36,360 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:36,382 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 
09:04:36,554 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "715cba81-2c66-4002-8a4a-b49e4bad71b8"}, "changed": false} >2018-06-22 09:04:36,576 p=21516 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment] ***************** >2018-06-22 09:04:37,343 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "6f9863af10167a29f48ec1049feb4379f9adaa22", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesDeployment-715cba81-2c66-4002-8a4a-b49e4bad71b8", "gid": 0, "group": "root", "md5sum": "ef2edd44e505cdb9a34fac9480cdc324", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19032, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672676.76-83397824129608/source", "state": "file", "uid": 0} >2018-06-22 09:04:37,365 p=21516 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesDeployment] ********** >2018-06-22 09:04:37,704 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:04:37,727 p=21516 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesDeployment] *********** >2018-06-22 09:04:37,745 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:37,767 p=21516 u=mistral | TASK [Remove deployed file for ControllerAllNodesDeployment when previous deployment failed] *** >2018-06-22 09:04:37,783 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:37,805 p=21516 u=mistral | TASK [Force remove deployed file for ControllerAllNodesDeployment] ************* >2018-06-22 09:04:37,820 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:37,842 p=21516 u=mistral | TASK [Run deployment ControllerAllNodesDeployment] 
***************************** >2018-06-22 09:04:38,764 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.notify.json)", "delta": "0:00:00.586834", "end": "2018-06-22 09:04:38.771525", "rc": 0, "start": "2018-06-22 09:04:38.184691", "stderr": "[2018-06-22 09:04:38,210] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.json\n[2018-06-22 09:04:38,331] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:04:38,331] (heat-config) [DEBUG] \n[2018-06-22 09:04:38,331] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 09:04:38,332] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.json < /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.notify.json\n[2018-06-22 09:04:38,765] (heat-config) [INFO] \n[2018-06-22 09:04:38,765] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:04:38,210] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.json", "[2018-06-22 09:04:38,331] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:04:38,331] (heat-config) [DEBUG] ", "[2018-06-22 09:04:38,331] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 09:04:38,332] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.json < /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.notify.json", "[2018-06-22 09:04:38,765] (heat-config) [INFO] ", "[2018-06-22 09:04:38,765] (heat-config) [DEBUG] "], "stdout": 
"", "stdout_lines": []} >2018-06-22 09:04:38,786 p=21516 u=mistral | TASK [Output for ControllerAllNodesDeployment] ********************************* >2018-06-22 09:04:38,829 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:04:38,210] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.json", > "[2018-06-22 09:04:38,331] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:04:38,331] (heat-config) [DEBUG] ", > "[2018-06-22 09:04:38,331] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 09:04:38,332] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.json < /var/lib/heat-config/deployed/715cba81-2c66-4002-8a4a-b49e4bad71b8.notify.json", > "[2018-06-22 09:04:38,765] (heat-config) [INFO] ", > "[2018-06-22 09:04:38,765] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:04:38,851 p=21516 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesDeployment] ************** >2018-06-22 09:04:38,864 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:38,885 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:04:38,937 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "7b44e291-6842-4e34-b4b9-8ff041f059e6"}, "changed": false} >2018-06-22 09:04:38,959 p=21516 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment] ******* >2018-06-22 09:04:39,574 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "45e953e264b7bed00914cc8acef6b02862222daa", "dest": 
"/var/lib/heat-config/tripleo-config-download/ControllerAllNodesValidationDeployment-7b44e291-6842-4e34-b4b9-8ff041f059e6", "gid": 0, "group": "root", "md5sum": "c7eec63068c96007f11b5f787036053a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4940, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672679.01-143071894684794/source", "state": "file", "uid": 0} >2018-06-22 09:04:39,595 p=21516 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesValidationDeployment] *** >2018-06-22 09:04:39,929 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:04:39,954 p=21516 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesValidationDeployment] *** >2018-06-22 09:04:39,971 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:39,994 p=21516 u=mistral | TASK [Remove deployed file for ControllerAllNodesValidationDeployment when previous deployment failed] *** >2018-06-22 09:04:40,012 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:40,034 p=21516 u=mistral | TASK [Force remove deployed file for ControllerAllNodesValidationDeployment] *** >2018-06-22 09:04:40,050 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:40,071 p=21516 u=mistral | TASK [Run deployment ControllerAllNodesValidationDeployment] ******************* >2018-06-22 09:04:41,579 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.notify.json)", "delta": "0:00:01.157028", "end": "2018-06-22 09:04:41.581359", "rc": 0, "start": "2018-06-22 09:04:40.424331", "stderr": 
"[2018-06-22 09:04:40,447] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json\n[2018-06-22 09:04:41,157] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:04:41,158] (heat-config) [DEBUG] [2018-06-22 09:04:40,467] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8\n[2018-06-22 09:04:40,468] (heat-config) [INFO] validate_fqdn=False\n[2018-06-22 09:04:40,468] (heat-config) [INFO] validate_ntp=True\n[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-dkkb7eagalme-0-rq7gh364aglr/f7a544e2-8dcb-457c-9107-92464db5616d\n[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:04:40,468] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6\n[2018-06-22 
09:04:41,153] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\nPing to 10.0.0.104 succeeded.\nSUCCESS\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\nPing to 172.17.1.16 succeeded.\nSUCCESS\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\nPing to 172.17.2.15 succeeded.\nSUCCESS\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\nPing to 172.17.3.18 succeeded.\nSUCCESS\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\nPing to 172.17.4.17 succeeded.\nSUCCESS\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\nPing to 192.168.24.8 succeeded.\nSUCCESS\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-06-22 09:04:41,153] (heat-config) [DEBUG] \n[2018-06-22 09:04:41,153] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6\n\n[2018-06-22 09:04:41,158] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:04:41,158] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.notify.json\n[2018-06-22 09:04:41,574] (heat-config) [INFO] \n[2018-06-22 09:04:41,575] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:04:40,447] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json", "[2018-06-22 09:04:41,157] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 
succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:04:41,158] (heat-config) [DEBUG] [2018-06-22 09:04:40,467] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", "[2018-06-22 09:04:40,468] (heat-config) [INFO] validate_fqdn=False", "[2018-06-22 09:04:40,468] (heat-config) [INFO] validate_ntp=True", "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-dkkb7eagalme-0-rq7gh364aglr/f7a544e2-8dcb-457c-9107-92464db5616d", "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:04:40,468] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6", "[2018-06-22 09:04:41,153] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", "Ping to 10.0.0.104 succeeded.", "SUCCESS", "Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", "Ping to 172.17.1.16 succeeded.", "SUCCESS", "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", "Ping to 172.17.2.15 succeeded.", "SUCCESS", "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", "Ping to 172.17.3.18 succeeded.", "SUCCESS", "Trying to ping 172.17.4.17 for local network 172.17.4.0/24.", "Ping to 172.17.4.17 succeeded.", "SUCCESS", "Trying to ping 192.168.24.8 for local 
network 192.168.24.0/24.", "Ping to 192.168.24.8 succeeded.", "SUCCESS", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-06-22 09:04:41,153] (heat-config) [DEBUG] ", "[2018-06-22 09:04:41,153] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6", "", "[2018-06-22 09:04:41,158] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:04:41,158] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.notify.json", "[2018-06-22 09:04:41,574] (heat-config) [INFO] ", "[2018-06-22 09:04:41,575] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:04:41,603 p=21516 u=mistral | TASK [Output for ControllerAllNodesValidationDeployment] *********************** >2018-06-22 09:04:41,648 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:04:40,447] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json", > "[2018-06-22 09:04:41,157] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping 
to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:04:41,158] (heat-config) [DEBUG] [2018-06-22 09:04:40,467] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", > "[2018-06-22 09:04:40,468] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-22 09:04:40,468] (heat-config) [INFO] validate_ntp=True", > "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-dkkb7eagalme-0-rq7gh364aglr/f7a544e2-8dcb-457c-9107-92464db5616d", > "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:04:40,468] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:04:40,468] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6", > "[2018-06-22 09:04:41,153] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", > "Ping to 10.0.0.104 succeeded.", > "SUCCESS", > "Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", > "Ping to 172.17.1.16 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", > "Ping to 172.17.2.15 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", > "Ping to 172.17.3.18 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.17 for local network 172.17.4.0/24.", > "Ping to 172.17.4.17 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", > "Ping to 192.168.24.8 succeeded.", > "SUCCESS", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-06-22 09:04:41,153] (heat-config) [DEBUG] ", > "[2018-06-22 
09:04:41,153] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/7b44e291-6842-4e34-b4b9-8ff041f059e6", > "", > "[2018-06-22 09:04:41,158] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:04:41,158] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.json < /var/lib/heat-config/deployed/7b44e291-6842-4e34-b4b9-8ff041f059e6.notify.json", > "[2018-06-22 09:04:41,574] (heat-config) [INFO] ", > "[2018-06-22 09:04:41,575] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:04:41,671 p=21516 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesValidationDeployment] **** >2018-06-22 09:04:41,685 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:41,704 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:04:41,794 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "fa6e2ac8-f729-44b7-bffa-bd0a40a6403c"}, "changed": false} >2018-06-22 09:04:41,816 p=21516 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment] ***************** >2018-06-22 09:04:42,473 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "ded7fe2538da9129c456f951c4dfcc647398427f", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostPrepDeployment-fa6e2ac8-f729-44b7-bffa-bd0a40a6403c", "gid": 0, "group": "root", "md5sum": "8874b856f10b7e9057cf3fe5dbd5ecd0", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 45397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672681.91-50669033408838/source", "state": "file", "uid": 0} >2018-06-22 09:04:42,497 p=21516 u=mistral | TASK [Check if deployed file exists for ControllerHostPrepDeployment] ********** >2018-06-22 09:04:42,828 
p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:04:42,850 p=21516 u=mistral | TASK [Check previous deployment rc for ControllerHostPrepDeployment] *********** >2018-06-22 09:04:42,867 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:42,889 p=21516 u=mistral | TASK [Remove deployed file for ControllerHostPrepDeployment when previous deployment failed] *** >2018-06-22 09:04:42,905 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:42,927 p=21516 u=mistral | TASK [Force remove deployed file for ControllerHostPrepDeployment] ************* >2018-06-22 09:04:42,942 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:04:42,963 p=21516 u=mistral | TASK [Run deployment ControllerHostPrepDeployment] ***************************** >2018-06-22 09:05:05,213 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.notify.json)", "delta": "0:00:21.906248", "end": "2018-06-22 09:05:05.209222", "rc": 0, "start": "2018-06-22 09:04:43.302974", "stderr": "[2018-06-22 09:04:43,328] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json\n[2018-06-22 09:05:04,795] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: 
[localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:05:04,795] (heat-config) [DEBUG] [2018-06-22 09:04:43,350] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_variables.json\n[2018-06-22 09:05:04,791] (heat-config) [INFO] Return code 
0\n[2018-06-22 09:05:04,791] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/aodh)\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\n\nTASK [aodh logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/cinder)\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\n\nTASK [cinder logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/cinder)\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/log/containers/cinder)\nok: [localhost] => (item=/var/lib/cinder)\n\nTASK [cinder_enable_iscsi_backend fact] ****************************************\nok: [localhost]\n\nTASK [cinder create LVM volume group dd] ***************************************\nskipping: [localhost]\n\nTASK [cinder create LVM volume group] ******************************************\nskipping: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/glance)\n\nTASK [glance logs readme] ******************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}\n...ignoring\n\nTASK [set_fact] ****************************************************************\nskipping: [localhost]\n\nTASK [file] ********************************************************************\nskipping: [localhost]\n\nTASK [stat] ********************************************************************\nskipping: [localhost]\n\nTASK [copy] ********************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \n\nTASK [mount] *******************************************************************\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \n\nTASK [Mount Node Staging Location] *********************************************\nskipping: [localhost]\n\nTASK [Mount NFS on host] *******************************************************\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\n\nTASK [gnocchi logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [get parameters] **********************************************************\nok: [localhost]\n\nTASK [get DeployedSSLCertificatePath attributes] *******************************\nskipping: [localhost]\n\nTASK [Assign bootstrap node] ***************************************************\nskipping: [localhost]\n\nTASK [set is_bootstrap_node fact] **********************************************\nskipping: [localhost]\n\nTASK [get haproxy status] ******************************************************\nskipping: [localhost]\n\nTASK [get pacemaker status] ****************************************************\nskipping: [localhost]\n\nTASK [get docker status] *******************************************************\nskipping: [localhost]\n\nTASK [get container_id] ********************************************************\nskipping: [localhost]\n\nTASK [get pcs resource name for haproxy container] *****************************\nskipping: [localhost]\n\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\nskipping: [localhost]\n\nTASK [push certificate content] ************************************************\nskipping: [localhost]\n\nTASK [set certificate ownership] ***********************************************\nskipping: [localhost]\n\nTASK [reload haproxy if enabled] ***********************************************\nskipping: [localhost]\n\nTASK [restart pacemaker resource for haproxy] **********************************\nskipping: [localhost]\n\nTASK [set kolla_dir fact] ******************************************************\nskipping: [localhost]\n\nTASK [set certificate group on host via container] *****************************\nskipping: [localhost]\n\nTASK [copy certificate 
from kolla directory to final location] *****************\nskipping: [localhost]\n\nTASK [send restart order to haproxy container] *********************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/haproxy)\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\n\nTASK [heat logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/heat)\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/horizon)\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\n\nTASK [horizon logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/keystone)\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\n\nTASK [keystone logs readme] ****************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [memcached logs readme] ***************************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/log/containers/mysql)\nok: [localhost] => (item=/var/lib/mysql)\n\nTASK [mysql logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [create /var/lib/neutron] *************************************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nok: [localhost] => (item=/var/log/containers/nova)\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/panko)\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\n\nTASK [panko logs readme] *******************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/rabbitmq)\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\n\nTASK [rabbitmq logs readme] ****************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}\n...ignoring\n\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/var/lib/redis)\nchanged: [localhost] => (item=/var/log/containers/redis)\nok: [localhost] => (item=/var/run/redis)\n\nTASK [redis logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [create /var/lib/sahara] **************************************************\nchanged: [localhost]\n\nTASK [create persistent sahara logs directory] *********************************\nchanged: [localhost]\n\nTASK [sahara logs readme] ******************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}\n...ignoring\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/srv/node)\nchanged: [localhost] => (item=/var/log/swift)\n\nTASK [Create swift logging symlink] ********************************************\nchanged: [localhost]\n\nTASK [create persistent directories] *******************************************\nok: [localhost] => (item=/srv/node)\nok: [localhost] => (item=/var/log/swift)\nok: [localhost] => (item=/var/log/containers)\n\nTASK [Set swift_use_local_disks fact] ******************************************\nok: [localhost]\n\nTASK [Create Swift d1 directory if needed] *************************************\nchanged: [localhost]\n\nTASK [swift logs readme] *******************************************************\nchanged: [localhost]\n\nTASK [Format SwiftRawDisks] 
****************************************************\n\nTASK [Mount devices defined in SwiftRawDisks] **********************************\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \n\n\n[2018-06-22 09:05:04,791] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml\n\n[2018-06-22 09:05:04,795] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-22 09:05:04,796] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.notify.json\n[2018-06-22 09:05:05,203] (heat-config) [INFO] \n[2018-06-22 09:05:05,203] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:04:43,328] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json", "[2018-06-22 09:05:04,795] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:05:04,795] (heat-config) [DEBUG] [2018-06-22 09:04:43,350] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_variables.json", "[2018-06-22 09:05:04,791] (heat-config) [INFO] Return code 
0", "[2018-06-22 09:05:04,791] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/aodh)", "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", "", "TASK [aodh logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/cinder)", "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", "", "TASK [cinder logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/cinder)", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/log/containers/cinder)", "ok: [localhost] => (item=/var/lib/cinder)", "", "TASK [cinder_enable_iscsi_backend fact] ****************************************", "ok: [localhost]", "", "TASK [cinder create LVM volume group dd] ***************************************", "skipping: [localhost]", "", "TASK [cinder create LVM volume group] ******************************************", "skipping: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/glance)", "", "TASK [glance logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", "...ignoring", "", "TASK [set_fact] ****************************************************************", "skipping: [localhost]", "", "TASK [file] ********************************************************************", "skipping: [localhost]", "", "TASK [stat] ********************************************************************", "skipping: [localhost]", "", "TASK [copy] ********************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", "", "TASK [mount] *******************************************************************", "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", "", "TASK [Mount Node Staging Location] *********************************************", "skipping: [localhost]", "", "TASK [Mount NFS on host] *******************************************************", "skipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) ", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/gnocchi)", "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", "", "TASK [gnocchi logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [get parameters] **********************************************************", "ok: [localhost]", "", "TASK [get DeployedSSLCertificatePath attributes] *******************************", "skipping: [localhost]", "", "TASK [Assign bootstrap node] ***************************************************", "skipping: [localhost]", "", "TASK [set is_bootstrap_node fact] **********************************************", "skipping: [localhost]", "", "TASK [get haproxy status] ******************************************************", "skipping: [localhost]", "", "TASK [get pacemaker status] ****************************************************", "skipping: [localhost]", "", "TASK [get docker status] *******************************************************", "skipping: [localhost]", "", "TASK [get container_id] ********************************************************", "skipping: [localhost]", "", "TASK [get pcs resource name for haproxy container] *****************************", "skipping: [localhost]", "", "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", "skipping: [localhost]", "", "TASK [push certificate content] ************************************************", "skipping: [localhost]", "", "TASK [set certificate ownership] ***********************************************", "skipping: [localhost]", "", "TASK [reload haproxy if enabled] ***********************************************", "skipping: [localhost]", "", "TASK [restart pacemaker resource for haproxy] **********************************", "skipping: [localhost]", "", "TASK [set kolla_dir fact] ******************************************************", "skipping: [localhost]", "", "TASK [set certificate group 
on host via container] *****************************", "skipping: [localhost]", "", "TASK [copy certificate from kolla directory to final location] *****************", "skipping: [localhost]", "", "TASK [send restart order to haproxy container] *********************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/haproxy)", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", "", "TASK [heat logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/heat)", "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/horizon)", "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", "", "TASK [horizon logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/keystone)", "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", "", "TASK [keystone logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [memcached logs readme] ***************************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/log/containers/mysql)", "ok: [localhost] => (item=/var/lib/mysql)", "", "TASK [mysql logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [create /var/lib/neutron] *************************************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "ok: [localhost] => (item=/var/log/containers/nova)", "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/panko)", "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", "", "TASK [panko logs readme] *******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/rabbitmq)", "changed: [localhost] => (item=/var/log/containers/rabbitmq)", "", "TASK [rabbitmq logs readme] ****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", "...ignoring", "", "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/var/lib/redis)", "changed: [localhost] => (item=/var/log/containers/redis)", "ok: [localhost] => (item=/var/run/redis)", "", "TASK [redis logs readme] *******************************************************", "changed: [localhost]", "", "TASK [create /var/lib/sahara] **************************************************", "changed: [localhost]", "", "TASK [create persistent sahara logs directory] *********************************", "changed: [localhost]", "", "TASK [sahara logs readme] ******************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", "...ignoring", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/srv/node)", "changed: [localhost] => (item=/var/log/swift)", "", "TASK [Create swift logging symlink] ********************************************", "changed: [localhost]", "", "TASK [create persistent directories] *******************************************", "ok: [localhost] => (item=/srv/node)", "ok: [localhost] => (item=/var/log/swift)", "ok: [localhost] => (item=/var/log/containers)", "", "TASK [Set swift_use_local_disks fact] ******************************************", "ok: [localhost]", "", "TASK [Create Swift d1 directory if needed] *************************************", "changed: [localhost]", "", "TASK [swift logs readme] *******************************************************", "changed: [localhost]", "", "TASK [Format SwiftRawDisks] ****************************************************", "", "TASK [Mount devices defined in SwiftRawDisks] **********************************", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=60 changed=33 unreachable=0 failed=0 ", "", "", "[2018-06-22 09:05:04,791] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml", "", "[2018-06-22 09:05:04,795] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-22 09:05:04,796] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json < 
/var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.notify.json", "[2018-06-22 09:05:05,203] (heat-config) [INFO] ", "[2018-06-22 09:05:05,203] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:05:05,238 p=21516 u=mistral | TASK [Output for ControllerHostPrepDeployment] ********************************* >2018-06-22 09:05:05,339 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:04:43,328] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json", > "[2018-06-22 09:05:04,795] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/aodh)\\nchanged: [localhost] => (item=/var/log/containers/httpd/aodh-api)\\n\\nTASK [aodh logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"b6cf6dbe054f430c33d39c1a1a88593536d6e659\\\", \\\"msg\\\": \\\"Destination directory /var/log/aodh does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/cinder)\\nchanged: [localhost] => (item=/var/log/containers/httpd/cinder-api)\\n\\nTASK [cinder logs readme] ******************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\\\", \\\"msg\\\": \\\"Destination directory /var/log/cinder does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/cinder)\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/log/containers/cinder)\\nok: [localhost] => (item=/var/lib/cinder)\\n\\nTASK [cinder_enable_iscsi_backend fact] ****************************************\\nok: [localhost]\\n\\nTASK [cinder create LVM volume group dd] ***************************************\\nskipping: [localhost]\\n\\nTASK [cinder create LVM volume group] ******************************************\\nskipping: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/glance)\\n\\nTASK [glance logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"e368ae3272baeb19e1113009ea5dae00e797c919\\\", \\\"msg\\\": \\\"Destination directory /var/log/glance does not exist\\\"}\\n...ignoring\\n\\nTASK [set_fact] ****************************************************************\\nskipping: [localhost]\\n\\nTASK [file] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [stat] ********************************************************************\\nskipping: [localhost]\\n\\nTASK [copy] ********************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u''}) \\n\\nTASK [mount] *******************************************************************\\nskipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) \\n\\nTASK [Mount Node Staging Location] *********************************************\\nskipping: [localhost]\\n\\nTASK [Mount NFS on host] *******************************************************\\nskipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) \\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/gnocchi)\\nchanged: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)\\n\\nTASK [gnocchi logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\\\", \\\"msg\\\": \\\"Destination directory /var/log/gnocchi does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [get parameters] **********************************************************\\nok: [localhost]\\n\\nTASK [get DeployedSSLCertificatePath attributes] *******************************\\nskipping: [localhost]\\n\\nTASK [Assign bootstrap node] ***************************************************\\nskipping: [localhost]\\n\\nTASK [set is_bootstrap_node fact] **********************************************\\nskipping: [localhost]\\n\\nTASK [get haproxy status] ******************************************************\\nskipping: [localhost]\\n\\nTASK [get pacemaker status] ****************************************************\\nskipping: [localhost]\\n\\nTASK [get docker status] *******************************************************\\nskipping: [localhost]\\n\\nTASK [get container_id] ********************************************************\\nskipping: [localhost]\\n\\nTASK [get pcs resource name for haproxy container] *****************************\\nskipping: [localhost]\\n\\nTASK [remove DeployedSSLCertificatePath if is dir] *****************************\\nskipping: [localhost]\\n\\nTASK [push certificate content] ************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate ownership] ***********************************************\\nskipping: [localhost]\\n\\nTASK [reload haproxy if enabled] ***********************************************\\nskipping: [localhost]\\n\\nTASK [restart pacemaker resource for haproxy] **********************************\\nskipping: [localhost]\\n\\nTASK [set kolla_dir fact] ******************************************************\\nskipping: [localhost]\\n\\nTASK [set certificate group on host via container] 
*****************************\\nskipping: [localhost]\\n\\nTASK [copy certificate from kolla directory to final location] *****************\\nskipping: [localhost]\\n\\nTASK [send restart order to haproxy container] *********************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/haproxy)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api)\\n\\nTASK [heat logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"d30ca3bda176434d31659e7379616dd162ddb246\\\", \\\"msg\\\": \\\"Destination directory /var/log/heat does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/heat)\\nchanged: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/horizon)\\nchanged: [localhost] => (item=/var/log/containers/httpd/horizon)\\n\\nTASK [horizon logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ac324739761cb36b925d6e309482e26f7fe49b91\\\", \\\"msg\\\": \\\"Destination directory /var/log/horizon does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/keystone)\\nchanged: [localhost] => (item=/var/log/containers/httpd/keystone)\\n\\nTASK [keystone logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"910be882addb6df99267e9bd303f6d9bf658562e\\\", \\\"msg\\\": \\\"Destination directory /var/log/keystone does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [memcached logs readme] ***************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/log/containers/mysql)\\nok: [localhost] => (item=/var/lib/mysql)\\n\\nTASK [mysql logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\nchanged: [localhost] => (item=/var/log/containers/httpd/neutron-api)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [create /var/lib/neutron] *************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-api)\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nok: [localhost] => (item=/var/log/containers/nova)\\nchanged: [localhost] => (item=/var/log/containers/httpd/nova-placement)\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/panko)\\nchanged: [localhost] => (item=/var/log/containers/httpd/panko-api)\\n\\nTASK [panko logs readme] *******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"903397bbd82e9b1f53087e3d7e8975d851857ce2\\\", \\\"msg\\\": \\\"Destination directory /var/log/panko does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/rabbitmq)\\nchanged: [localhost] => (item=/var/log/containers/rabbitmq)\\n\\nTASK [rabbitmq logs readme] ****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ee241f2199f264c9d0f384cf389fe255e8bf8a77\\\", \\\"msg\\\": \\\"Destination directory /var/log/rabbitmq does not exist\\\"}\\n...ignoring\\n\\nTASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/var/lib/redis)\\nchanged: [localhost] => (item=/var/log/containers/redis)\\nok: [localhost] => (item=/var/run/redis)\\n\\nTASK [redis logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [create /var/lib/sahara] **************************************************\\nchanged: [localhost]\\n\\nTASK [create persistent sahara logs directory] *********************************\\nchanged: [localhost]\\n\\nTASK [sahara logs readme] ******************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"b0212a1177fa4a88502d17a1cbc31198040cf047\\\", \\\"msg\\\": \\\"Destination directory /var/log/sahara does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/srv/node)\\nchanged: [localhost] => (item=/var/log/swift)\\n\\nTASK [Create swift logging symlink] ********************************************\\nchanged: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nok: [localhost] => (item=/srv/node)\\nok: [localhost] => (item=/var/log/swift)\\nok: [localhost] => (item=/var/log/containers)\\n\\nTASK [Set swift_use_local_disks fact] ******************************************\\nok: [localhost]\\n\\nTASK [Create Swift d1 directory if needed] *************************************\\nchanged: [localhost]\\n\\nTASK [swift logs readme] *******************************************************\\nchanged: [localhost]\\n\\nTASK [Format SwiftRawDisks] ****************************************************\\n\\nTASK [Mount devices defined in SwiftRawDisks] **********************************\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=60 changed=33 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:05:04,795] (heat-config) [DEBUG] [2018-06-22 09:04:43,350] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_variables.json", > "[2018-06-22 09:05:04,791] (heat-config) [INFO] Return 
code 0", > "[2018-06-22 09:05:04,791] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/aodh)", > "changed: [localhost] => (item=/var/log/containers/httpd/aodh-api)", > "", > "TASK [aodh logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"b6cf6dbe054f430c33d39c1a1a88593536d6e659\", \"msg\": \"Destination directory /var/log/aodh does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/cinder)", > "changed: [localhost] => (item=/var/log/containers/httpd/cinder-api)", > "", > "TASK [cinder logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292\", \"msg\": \"Destination directory /var/log/cinder does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/cinder)", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/log/containers/cinder)", > "ok: [localhost] => (item=/var/lib/cinder)", > "", > "TASK [cinder_enable_iscsi_backend fact] ****************************************", > "ok: [localhost]", > "", > "TASK [cinder create LVM volume group dd] ***************************************", > "skipping: [localhost]", > "", > "TASK [cinder create LVM volume group] ******************************************", > "skipping: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/glance)", > "", > "TASK [glance logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"e368ae3272baeb19e1113009ea5dae00e797c919\", \"msg\": \"Destination directory /var/log/glance does not exist\"}", > "...ignoring", > "", > "TASK [set_fact] ****************************************************************", > "skipping: [localhost]", > "", > "TASK [file] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [stat] ********************************************************************", > "skipping: [localhost]", > "", > "TASK [copy] ********************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u''}) ", > "", > "TASK [mount] *******************************************************************", > "skipping: [localhost] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) ", > "", > "TASK [Mount Node Staging Location] *********************************************", > "skipping: [localhost]", > "", > "TASK [Mount NFS on host] *******************************************************", > "skipping: [localhost] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) ", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/gnocchi)", > "changed: [localhost] => (item=/var/log/containers/httpd/gnocchi-api)", > "", > "TASK [gnocchi logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"2f6114e0f135d7222e70a07579ab0b2b6f967ff8\", \"msg\": \"Destination directory /var/log/gnocchi does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [get parameters] **********************************************************", > "ok: [localhost]", > "", > "TASK [get DeployedSSLCertificatePath attributes] *******************************", > "skipping: [localhost]", > "", > "TASK [Assign bootstrap node] ***************************************************", > "skipping: [localhost]", > "", > "TASK [set is_bootstrap_node fact] **********************************************", > "skipping: [localhost]", > "", > "TASK [get haproxy status] ******************************************************", > "skipping: [localhost]", > "", > "TASK [get pacemaker status] ****************************************************", > "skipping: [localhost]", > "", > "TASK [get docker status] *******************************************************", > "skipping: [localhost]", > "", > "TASK [get container_id] ********************************************************", > "skipping: [localhost]", > "", > "TASK [get pcs resource name for haproxy container] *****************************", > "skipping: [localhost]", > "", > "TASK [remove DeployedSSLCertificatePath if is dir] *****************************", > "skipping: [localhost]", > "", > "TASK [push certificate content] ************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate ownership] ***********************************************", > "skipping: [localhost]", > "", > "TASK [reload haproxy if enabled] ***********************************************", > "skipping: [localhost]", > "", > "TASK [restart pacemaker resource for haproxy] **********************************", > "skipping: [localhost]", > "", > "TASK [set kolla_dir fact] 
******************************************************", > "skipping: [localhost]", > "", > "TASK [set certificate group on host via container] *****************************", > "skipping: [localhost]", > "", > "TASK [copy certificate from kolla directory to final location] *****************", > "skipping: [localhost]", > "", > "TASK [send restart order to haproxy container] *********************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/haproxy)", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api)", > "", > "TASK [heat logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"d30ca3bda176434d31659e7379616dd162ddb246\", \"msg\": \"Destination directory /var/log/heat does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/heat)", > "changed: [localhost] => (item=/var/log/containers/httpd/heat-api-cfn)", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/horizon)", > "changed: [localhost] => (item=/var/log/containers/httpd/horizon)", > "", > "TASK [horizon logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ac324739761cb36b925d6e309482e26f7fe49b91\", \"msg\": \"Destination directory /var/log/horizon does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/keystone)", > "changed: [localhost] => (item=/var/log/containers/httpd/keystone)", > "", > "TASK [keystone logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"910be882addb6df99267e9bd303f6d9bf658562e\", \"msg\": \"Destination directory /var/log/keystone does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [memcached logs readme] ***************************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/log/containers/mysql)", > "ok: [localhost] => (item=/var/lib/mysql)", > "", > "TASK [mysql logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "changed: [localhost] => (item=/var/log/containers/httpd/neutron-api)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [create /var/lib/neutron] *************************************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-api)", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "ok: [localhost] => (item=/var/log/containers/nova)", > "changed: [localhost] => (item=/var/log/containers/httpd/nova-placement)", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/panko)", > "changed: [localhost] => (item=/var/log/containers/httpd/panko-api)", > "", > "TASK [panko logs readme] *******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"903397bbd82e9b1f53087e3d7e8975d851857ce2\", \"msg\": \"Destination directory /var/log/panko does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/rabbitmq)", > "changed: [localhost] => (item=/var/log/containers/rabbitmq)", > "", > "TASK [rabbitmq logs readme] ****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"ee241f2199f264c9d0f384cf389fe255e8bf8a77\", \"msg\": \"Destination directory /var/log/rabbitmq does not exist\"}", > "...ignoring", > "", > "TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/var/lib/redis)", > "changed: [localhost] => (item=/var/log/containers/redis)", > "ok: [localhost] => (item=/var/run/redis)", > "", > "TASK [redis logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [create /var/lib/sahara] **************************************************", > "changed: [localhost]", > "", > "TASK [create persistent sahara logs directory] *********************************", > "changed: [localhost]", > "", > "TASK [sahara logs readme] ******************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"b0212a1177fa4a88502d17a1cbc31198040cf047\", \"msg\": \"Destination directory /var/log/sahara does not exist\"}", > "...ignoring", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/srv/node)", > "changed: [localhost] => (item=/var/log/swift)", > "", > "TASK [Create swift logging symlink] ********************************************", > "changed: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "ok: [localhost] => (item=/srv/node)", > "ok: [localhost] => (item=/var/log/swift)", > "ok: [localhost] => (item=/var/log/containers)", > "", > "TASK [Set swift_use_local_disks fact] ******************************************", > "ok: [localhost]", > "", > "TASK [Create Swift d1 directory if needed] *************************************", > "changed: [localhost]", > "", > "TASK [swift logs readme] *******************************************************", > "changed: [localhost]", > "", > "TASK [Format SwiftRawDisks] ****************************************************", > "", > "TASK [Mount devices defined in SwiftRawDisks] **********************************", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=60 changed=33 unreachable=0 failed=0 ", > "", > "", > "[2018-06-22 09:05:04,791] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c_playbook.yaml", > "", > "[2018-06-22 09:05:04,795] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-22 09:05:04,796] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.json < /var/lib/heat-config/deployed/fa6e2ac8-f729-44b7-bffa-bd0a40a6403c.notify.json", > "[2018-06-22 09:05:05,203] (heat-config) [INFO] ", > "[2018-06-22 09:05:05,203] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:05,364 p=21516 u=mistral | TASK [Check-mode for Run deployment ControllerHostPrepDeployment] ************** >2018-06-22 09:05:05,381 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:05,401 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:05:05,493 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "e9b6658c-6510-4121-b194-f3e7cec48261"}, "changed": false} >2018-06-22 09:05:05,518 p=21516 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy] ******************** >2018-06-22 09:05:06,201 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "0072b646effb6bfca43b18c6b3dc2346d26370f9", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerArtifactsDeploy-e9b6658c-6510-4121-b194-f3e7cec48261", "gid": 0, "group": "root", "md5sum": "480f3b706e20a3e602a94b1e9bf87050", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672705.62-13596499400968/source", "state": "file", "uid": 0} >2018-06-22 09:05:06,223 p=21516 u=mistral | TASK [Check if deployed file exists for ControllerArtifactsDeploy] ************* >2018-06-22 09:05:06,594 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:06,618 p=21516 u=mistral | TASK [Check previous deployment rc for ControllerArtifactsDeploy] ************** >2018-06-22 09:05:06,634 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-22 09:05:06,688 p=21516 u=mistral | TASK [Remove deployed file for ControllerArtifactsDeploy when previous deployment failed] *** >2018-06-22 09:05:06,707 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:06,728 p=21516 u=mistral | TASK [Force remove deployed file for ControllerArtifactsDeploy] **************** >2018-06-22 09:05:06,750 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:06,772 p=21516 u=mistral | TASK [Run deployment ControllerArtifactsDeploy] ******************************** >2018-06-22 09:05:07,578 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.notify.json)", "delta": "0:00:00.466733", "end": "2018-06-22 09:05:07.589786", "rc": 0, "start": "2018-06-22 09:05:07.123053", "stderr": "[2018-06-22 09:05:07,148] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json\n[2018-06-22 09:05:07,178] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:05:07,179] (heat-config) [DEBUG] [2018-06-22 09:05:07,169] (heat-config) [INFO] artifact_urls=\n[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307\n[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ControllerArtifactsDeploy-q53opecfia5y-0-5xvdvzojviyc/25608cf8-1b6a-4839-9d2c-8516e628ad25\n[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:05:07,169] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261\n[2018-06-22 09:05:07,175] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-06-22 09:05:07,175] (heat-config) [DEBUG] \n[2018-06-22 09:05:07,175] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261\n\n[2018-06-22 09:05:07,179] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:05:07,179] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.notify.json\n[2018-06-22 09:05:07,583] (heat-config) [INFO] \n[2018-06-22 09:05:07,584] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:07,148] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json", "[2018-06-22 09:05:07,178] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:05:07,179] (heat-config) [DEBUG] [2018-06-22 09:05:07,169] (heat-config) [INFO] artifact_urls=", "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ControllerArtifactsDeploy-q53opecfia5y-0-5xvdvzojviyc/25608cf8-1b6a-4839-9d2c-8516e628ad25", "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:05:07,169] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261", "[2018-06-22 09:05:07,175] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-06-22 09:05:07,175] (heat-config) [DEBUG] ", "[2018-06-22 09:05:07,175] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261", "", "[2018-06-22 09:05:07,179] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:05:07,179] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.notify.json", "[2018-06-22 09:05:07,583] (heat-config) [INFO] ", "[2018-06-22 09:05:07,584] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:05:07,600 p=21516 u=mistral | TASK [Output for ControllerArtifactsDeploy] ************************************ >2018-06-22 09:05:07,653 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:07,148] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json", > "[2018-06-22 09:05:07,178] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:05:07,179] (heat-config) [DEBUG] [2018-06-22 09:05:07,169] (heat-config) [INFO] artifact_urls=", > "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_server_id=90f67518-2ffc-4ccd-bde0-bdb36b720307", > "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ControllerArtifactsDeploy-q53opecfia5y-0-5xvdvzojviyc/25608cf8-1b6a-4839-9d2c-8516e628ad25", > "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:05:07,169] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:05:07,169] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261", > "[2018-06-22 09:05:07,175] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-06-22 09:05:07,175] (heat-config) [DEBUG] ", > "[2018-06-22 09:05:07,175] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e9b6658c-6510-4121-b194-f3e7cec48261", > "", > "[2018-06-22 09:05:07,179] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:05:07,179] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.json < /var/lib/heat-config/deployed/e9b6658c-6510-4121-b194-f3e7cec48261.notify.json", > "[2018-06-22 09:05:07,583] (heat-config) [INFO] ", > "[2018-06-22 09:05:07,584] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:07,674 p=21516 u=mistral | TASK [Check-mode for Run deployment ControllerArtifactsDeploy] ***************** >2018-06-22 09:05:07,688 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:07,710 p=21516 u=mistral | TASK [include] ***************************************************************** >2018-06-22 09:05:07,923 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Compute/deployments.yaml for compute-0 >2018-06-22 09:05:07,934 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Compute/deployments.yaml for compute-0 >2018-06-22 09:05:07,942 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Compute/deployments.yaml for compute-0 >2018-06-22 09:05:07,949 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Compute/deployments.yaml for compute-0 >2018-06-22 09:05:07,957 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Compute/deployments.yaml for compute-0 >2018-06-22 09:05:07,964 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Compute/deployments.yaml for compute-0 >2018-06-22 
09:05:07,972 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Compute/deployments.yaml for compute-0 >2018-06-22 09:05:07,979 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/Compute/deployments.yaml for compute-0 >2018-06-22 09:05:08,017 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:05:08,080 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "47e3bb7e-dbd0-432c-b417-77caf844175a"}, "changed": false} >2018-06-22 09:05:08,097 p=21516 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-06-22 09:05:08,769 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "4fe761dd16a6c1744ee5885e1b08177e41e671e8", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-47e3bb7e-dbd0-432c-b417-77caf844175a", "gid": 0, "group": "root", "md5sum": "2524e9d107bcf31bbe44a7f9f33a92e9", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9259, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672708.16-48005415887987/source", "state": "file", "uid": 0} >2018-06-22 09:05:08,788 p=21516 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-06-22 09:05:09,118 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:09,138 p=21516 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-06-22 09:05:09,155 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:09,174 p=21516 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-06-22 09:05:09,192 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-06-22 09:05:09,210 p=21516 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-06-22 09:05:09,233 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:09,252 p=21516 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-06-22 09:05:29,337 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.notify.json)", "delta": "0:00:19.742567", "end": "2018-06-22 09:05:29.331917", "rc": 0, "start": "2018-06-22 09:05:09.589350", "stderr": "[2018-06-22 09:05:09,614] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json\n[2018-06-22 09:05:28,906] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", 
\\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 09:05:10 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 09:05:10 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 09:05:10 AM] [INFO] Not using any mapping file.\\n[2018/06/22 09:05:10 AM] [INFO] Finding active nics\\n[2018/06/22 09:05:10 AM] [INFO] eth2 is an embedded active 
nic\\n[2018/06/22 09:05:10 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 09:05:10 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 09:05:10 AM] [INFO] lo is not an active nic\\n[2018/06/22 09:05:10 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 09:05:10 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 09:05:10 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 09:05:10 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 09:05:10 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth0\\n[2018/06/22 09:05:10 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 09:05:10 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth1\\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth2\\n[2018/06/22 09:05:10 AM] [INFO] applying network configs...\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 09:05:10 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 09:05:10 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: 
eth0\\n[2018/06/22 09:05:15 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:05:19 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:05:23 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-22 09:05:28,907] (heat-config) [DEBUG] [2018-06-22 09:05:09,636] (heat-config) [INFO] interface_name=nic1\n[2018-06-22 09:05:09,636] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-22 09:05:09,636] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NetworkDeployment-yle4twzvdnzi-TripleOSoftwareDeployment-smqvuunztcz6/abe8a6bc-c9a0-4460-a3ad-bf6b049b1eb3\n[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:05:09,637] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a\n[2018-06-22 09:05:28,903] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-22 09:05:28,903] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/22 09:05:10 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/22 09:05:10 AM] [INFO] Ifcfg net config provider created.\n[2018/06/22 09:05:10 AM] [INFO] Not using any mapping file.\n[2018/06/22 09:05:10 AM] [INFO] Finding active nics\n[2018/06/22 
09:05:10 AM] [INFO] eth2 is an embedded active nic\n[2018/06/22 09:05:10 AM] [INFO] eth1 is an embedded active nic\n[2018/06/22 09:05:10 AM] [INFO] eth0 is an embedded active nic\n[2018/06/22 09:05:10 AM] [INFO] lo is not an active nic\n[2018/06/22 09:05:10 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/22 09:05:10 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/22 09:05:10 AM] [INFO] nic3 mapped to: eth2\n[2018/06/22 09:05:10 AM] [INFO] nic2 mapped to: eth1\n[2018/06/22 09:05:10 AM] [INFO] nic1 mapped to: eth0\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth0\n[2018/06/22 09:05:10 AM] [INFO] adding custom route for interface: eth0\n[2018/06/22 09:05:10 AM] [INFO] adding bridge: br-isolated\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth1\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan20\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan30\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan50\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth2\n[2018/06/22 09:05:10 AM] [INFO] applying network configs...\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth2\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth1\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth0\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/06/22 09:05:10 
AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/22 09:05:10 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/22 09:05:10 AM] [INFO] running ifup on interface: eth2\n[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth1\n[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: 
eth0\n[2018/06/22 09:05:15 AM] [INFO] running ifup on interface: vlan20\n[2018/06/22 09:05:19 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 09:05:23 AM] [INFO] running ifup on interface: vlan50\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan20\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 
192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-22 09:05:28,903] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a\n\n[2018-06-22 09:05:28,907] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:05:28,908] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.notify.json\n[2018-06-22 09:05:29,324] (heat-config) [INFO] \n[2018-06-22 09:05:29,325] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:09,614] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json", "[2018-06-22 09:05:28,906] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 
09:05:10 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 09:05:10 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 09:05:10 AM] [INFO] Not using any mapping file.\\n[2018/06/22 09:05:10 AM] [INFO] Finding active nics\\n[2018/06/22 09:05:10 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 09:05:10 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 09:05:10 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 09:05:10 AM] [INFO] lo is not an active nic\\n[2018/06/22 09:05:10 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 09:05:10 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 09:05:10 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 09:05:10 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 09:05:10 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth0\\n[2018/06/22 09:05:10 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 09:05:10 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth1\\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth2\\n[2018/06/22 09:05:10 AM] [INFO] applying network configs...\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on 
interface: vlan50\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 09:05:10 AM] [INFO] Writing 
config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 09:05:10 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 09:05:15 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:05:19 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:05:23 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 
']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-22 09:05:28,907] (heat-config) [DEBUG] [2018-06-22 09:05:09,636] (heat-config) [INFO] interface_name=nic1", "[2018-06-22 09:05:09,636] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-22 09:05:09,636] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NetworkDeployment-yle4twzvdnzi-TripleOSoftwareDeployment-smqvuunztcz6/abe8a6bc-c9a0-4460-a3ad-bf6b049b1eb3", "[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:05:09,637] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a", "[2018-06-22 09:05:28,903] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-22 09:05:28,903] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": 
\"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/22 09:05:10 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/22 09:05:10 AM] [INFO] Ifcfg net 
config provider created.", "[2018/06/22 09:05:10 AM] [INFO] Not using any mapping file.", "[2018/06/22 09:05:10 AM] [INFO] Finding active nics", "[2018/06/22 09:05:10 AM] [INFO] eth2 is an embedded active nic", "[2018/06/22 09:05:10 AM] [INFO] eth1 is an embedded active nic", "[2018/06/22 09:05:10 AM] [INFO] eth0 is an embedded active nic", "[2018/06/22 09:05:10 AM] [INFO] lo is not an active nic", "[2018/06/22 09:05:10 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/22 09:05:10 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/22 09:05:10 AM] [INFO] nic3 mapped to: eth2", "[2018/06/22 09:05:10 AM] [INFO] nic2 mapped to: eth1", "[2018/06/22 09:05:10 AM] [INFO] nic1 mapped to: eth0", "[2018/06/22 09:05:10 AM] [INFO] adding interface: eth0", "[2018/06/22 09:05:10 AM] [INFO] adding custom route for interface: eth0", "[2018/06/22 09:05:10 AM] [INFO] adding bridge: br-isolated", "[2018/06/22 09:05:10 AM] [INFO] adding interface: eth1", "[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan20", "[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan30", "[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan50", "[2018/06/22 09:05:10 AM] [INFO] adding interface: eth2", "[2018/06/22 09:05:10 AM] [INFO] applying network configs...", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth2", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth1", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth0", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50", "[2018/06/22 09:05:10 AM] [INFO] running ifdown on bridge: 
br-isolated", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", 
"[2018/06/22 09:05:10 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/22 09:05:10 AM] [INFO] running ifup on interface: eth2", "[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth1", "[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth0", "[2018/06/22 09:05:15 AM] [INFO] running ifup on interface: vlan20", "[2018/06/22 09:05:19 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 09:05:23 AM] [INFO] running ifup on interface: vlan50", "[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan20", "[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local 
IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-22 09:05:28,903] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a", "", "[2018-06-22 09:05:28,907] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:05:28,908] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.notify.json", "[2018-06-22 09:05:29,324] (heat-config) [INFO] ", "[2018-06-22 09:05:29,325] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:05:29,357 p=21516 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-22 09:05:29,455 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:09,614] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json", > "[2018-06-22 09:05:28,906] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": 
[{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.15/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.21/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, 
{\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 09:05:10 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 09:05:10 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 09:05:10 AM] [INFO] Not using any mapping file.\\n[2018/06/22 09:05:10 AM] [INFO] Finding active nics\\n[2018/06/22 09:05:10 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 09:05:10 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 09:05:10 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 09:05:10 AM] [INFO] lo is not an active nic\\n[2018/06/22 09:05:10 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 09:05:10 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 09:05:10 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 09:05:10 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 09:05:10 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth0\\n[2018/06/22 09:05:10 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 09:05:10 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth1\\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan20\\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan50\\n[2018/06/22 09:05:10 AM] [INFO] adding interface: eth2\\n[2018/06/22 09:05:10 AM] [INFO] applying network configs...\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 
09:05:10 AM] [INFO] running ifdown on interface: eth2\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50\\n[2018/06/22 09:05:10 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 09:05:10 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 09:05:10 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 09:05:10 AM] [INFO] running ifup on interface: eth2\\n[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 09:05:15 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:05:19 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:05:23 AM] [INFO] running ifup on interface: vlan50\\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan20\\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-22 09:05:28,907] (heat-config) [DEBUG] [2018-06-22 09:05:09,636] (heat-config) [INFO] interface_name=nic1", > "[2018-06-22 09:05:09,636] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-22 09:05:09,636] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NetworkDeployment-yle4twzvdnzi-TripleOSoftwareDeployment-smqvuunztcz6/abe8a6bc-c9a0-4460-a3ad-bf6b049b1eb3", > "[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:05:09,637] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:05:09,637] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a", > "[2018-06-22 
09:05:28,903] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-22 09:05:28,903] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.15/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.21/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", > "++ 
type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/22 09:05:10 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/22 09:05:10 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/22 09:05:10 AM] [INFO] Not using any mapping file.", > "[2018/06/22 09:05:10 AM] [INFO] Finding active nics", > "[2018/06/22 09:05:10 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/22 09:05:10 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/22 09:05:10 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/22 09:05:10 AM] [INFO] lo is not an active nic", > "[2018/06/22 09:05:10 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/22 09:05:10 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/22 09:05:10 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/22 09:05:10 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/22 09:05:10 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/22 09:05:10 AM] [INFO] adding interface: eth0", > "[2018/06/22 09:05:10 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/22 09:05:10 AM] [INFO] adding bridge: br-isolated", > "[2018/06/22 09:05:10 AM] [INFO] adding interface: eth1", > "[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan20", > "[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan30", > "[2018/06/22 09:05:10 AM] [INFO] adding vlan: vlan50", > "[2018/06/22 09:05:10 AM] [INFO] adding interface: eth2", > "[2018/06/22 09:05:10 AM] [INFO] applying network configs...", > "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/22 
09:05:10 AM] [INFO] running ifdown on interface: eth2", > "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan20", > "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 09:05:10 AM] [INFO] running ifdown on interface: vlan50", > "[2018/06/22 09:05:10 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/22 09:05:10 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/22 09:05:10 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/22 09:05:10 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/22 09:05:10 AM] [INFO] running ifup on interface: eth2", > "[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth1", > "[2018/06/22 09:05:11 AM] [INFO] running ifup on interface: eth0", > "[2018/06/22 09:05:15 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/22 09:05:19 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 09:05:23 AM] [INFO] running ifup on interface: vlan50", > "[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan20", > "[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 09:05:28 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 
's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-22 09:05:28,903] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/47e3bb7e-dbd0-432c-b417-77caf844175a", > "", > "[2018-06-22 09:05:28,907] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:05:28,908] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.json < /var/lib/heat-config/deployed/47e3bb7e-dbd0-432c-b417-77caf844175a.notify.json", > "[2018-06-22 09:05:29,324] (heat-config) [INFO] ", > "[2018-06-22 09:05:29,325] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:29,474 p=21516 u=mistral | TASK [Check-mode for Run deployment 
NetworkDeployment] ************************* >2018-06-22 09:05:29,488 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:29,504 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:05:29,598 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "8b0f844a-269e-4ed6-ae78-6f38b03e2d2e"}, "changed": false} >2018-06-22 09:05:29,616 p=21516 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment] ************* >2018-06-22 09:05:30,256 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "73398fa87f0d6e881139ec9fead6d23e4358858a", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeUpgradeInitDeployment-8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", "gid": 0, "group": "root", "md5sum": "514d2940593b46d28dc263e5a108335d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1182, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672729.71-11160906959913/source", "state": "file", "uid": 0} >2018-06-22 09:05:30,276 p=21516 u=mistral | TASK [Check if deployed file exists for NovaComputeUpgradeInitDeployment] ****** >2018-06-22 09:05:30,641 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:30,661 p=21516 u=mistral | TASK [Check previous deployment rc for NovaComputeUpgradeInitDeployment] ******* >2018-06-22 09:05:30,676 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:30,693 p=21516 u=mistral | TASK [Remove deployed file for NovaComputeUpgradeInitDeployment when previous deployment failed] *** >2018-06-22 09:05:30,710 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:30,729 p=21516 u=mistral | TASK [Force remove 
deployed file for NovaComputeUpgradeInitDeployment] ********* >2018-06-22 09:05:30,743 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:30,763 p=21516 u=mistral | TASK [Run deployment NovaComputeUpgradeInitDeployment] ************************* >2018-06-22 09:05:31,590 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.notify.json)", "delta": "0:00:00.459874", "end": "2018-06-22 09:05:31.600734", "rc": 0, "start": "2018-06-22 09:05:31.140860", "stderr": "[2018-06-22 09:05:31,165] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json\n[2018-06-22 09:05:31,193] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:05:31,193] (heat-config) [DEBUG] [2018-06-22 09:05:31,185] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-22 09:05:31,185] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:05:31,186] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NovaComputeUpgradeInitDeployment-rmoezl2i7qyh/f4eb2c20-6c2c-40f3-8332-f45df76ed2ac\n[2018-06-22 09:05:31,186] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:05:31,186] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:05:31,186] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e\n[2018-06-22 09:05:31,190] (heat-config) [INFO] \n[2018-06-22 09:05:31,190] (heat-config) [DEBUG] \n[2018-06-22 09:05:31,190] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e\n\n[2018-06-22 09:05:31,193] 
(heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:05:31,194] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.notify.json\n[2018-06-22 09:05:31,594] (heat-config) [INFO] \n[2018-06-22 09:05:31,594] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:31,165] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json", "[2018-06-22 09:05:31,193] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:05:31,193] (heat-config) [DEBUG] [2018-06-22 09:05:31,185] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-22 09:05:31,185] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:05:31,186] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NovaComputeUpgradeInitDeployment-rmoezl2i7qyh/f4eb2c20-6c2c-40f3-8332-f45df76ed2ac", "[2018-06-22 09:05:31,186] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:05:31,186] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:05:31,186] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", "[2018-06-22 09:05:31,190] (heat-config) [INFO] ", "[2018-06-22 09:05:31,190] (heat-config) [DEBUG] ", "[2018-06-22 09:05:31,190] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", "", "[2018-06-22 09:05:31,193] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:05:31,194] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json < 
/var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.notify.json", "[2018-06-22 09:05:31,594] (heat-config) [INFO] ", "[2018-06-22 09:05:31,594] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:05:31,610 p=21516 u=mistral | TASK [Output for NovaComputeUpgradeInitDeployment] ***************************** >2018-06-22 09:05:31,657 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:31,165] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json", > "[2018-06-22 09:05:31,193] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:05:31,193] (heat-config) [DEBUG] [2018-06-22 09:05:31,185] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-22 09:05:31,185] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:05:31,186] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-khdfkn36yqgs-0-dpxsps5qjksx-NovaComputeUpgradeInitDeployment-rmoezl2i7qyh/f4eb2c20-6c2c-40f3-8332-f45df76ed2ac", > "[2018-06-22 09:05:31,186] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:05:31,186] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:05:31,186] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", > "[2018-06-22 09:05:31,190] (heat-config) [INFO] ", > "[2018-06-22 09:05:31,190] (heat-config) [DEBUG] ", > "[2018-06-22 09:05:31,190] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e", > "", > "[2018-06-22 09:05:31,193] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:05:31,194] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.json < /var/lib/heat-config/deployed/8b0f844a-269e-4ed6-ae78-6f38b03e2d2e.notify.json", > "[2018-06-22 09:05:31,594] (heat-config) [INFO] ", > "[2018-06-22 09:05:31,594] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:31,675 p=21516 u=mistral | TASK [Check-mode for Run deployment NovaComputeUpgradeInitDeployment] ********** >2018-06-22 09:05:31,687 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:31,703 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:05:31,825 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "e55e7117-4504-4c31-a067-4168ecbaba26"}, "changed": false} >2018-06-22 09:05:31,844 p=21516 u=mistral | TASK [Render deployment file for NovaComputeDeployment] ************************ >2018-06-22 09:05:32,513 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "5d8d2f3cef8fa81a7aca49e891008e35faa2aa4e", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeDeployment-e55e7117-4504-4c31-a067-4168ecbaba26", "gid": 0, "group": "root", "md5sum": "4ddcfbec45f0aafc2dd949b7c6c131c1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21872, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672731.97-11511695042380/source", "state": "file", "uid": 0} >2018-06-22 09:05:32,533 p=21516 u=mistral | TASK [Check if deployed file exists for NovaComputeDeployment] ***************** >2018-06-22 09:05:32,849 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:32,869 p=21516 u=mistral | TASK [Check previous deployment rc for NovaComputeDeployment] ****************** >2018-06-22 09:05:32,885 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:05:32,903 p=21516 u=mistral | TASK [Remove deployed file for NovaComputeDeployment when previous deployment failed] *** >2018-06-22 09:05:32,919 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:32,937 p=21516 u=mistral | TASK [Force remove deployed file for NovaComputeDeployment] ******************** >2018-06-22 09:05:32,952 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:32,969 p=21516 u=mistral | TASK [Run deployment NovaComputeDeployment] ************************************ >2018-06-22 09:05:33,833 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.notify.json)", "delta": "0:00:00.540341", "end": "2018-06-22 09:05:33.842871", "rc": 0, "start": "2018-06-22 09:05:33.302530", "stderr": "[2018-06-22 09:05:33,328] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json\n[2018-06-22 09:05:33,441] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:05:33,441] (heat-config) [DEBUG] \n[2018-06-22 09:05:33,441] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 09:05:33,441] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.notify.json\n[2018-06-22 09:05:33,836] (heat-config) [INFO] \n[2018-06-22 09:05:33,837] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:33,328] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json", "[2018-06-22 09:05:33,441] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:05:33,441] (heat-config) [DEBUG] ", "[2018-06-22 09:05:33,441] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 09:05:33,441] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.notify.json", "[2018-06-22 09:05:33,836] (heat-config) [INFO] ", "[2018-06-22 09:05:33,837] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:05:33,854 p=21516 u=mistral | TASK [Output for NovaComputeDeployment] **************************************** >2018-06-22 09:05:33,896 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:33,328] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json", > "[2018-06-22 09:05:33,441] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:05:33,441] (heat-config) [DEBUG] ", > "[2018-06-22 09:05:33,441] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 09:05:33,441] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.json < /var/lib/heat-config/deployed/e55e7117-4504-4c31-a067-4168ecbaba26.notify.json", > "[2018-06-22 09:05:33,836] (heat-config) [INFO] ", > "[2018-06-22 09:05:33,837] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:33,914 p=21516 u=mistral | TASK [Check-mode for Run deployment NovaComputeDeployment] ********************* >2018-06-22 09:05:33,926 p=21516 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:33,944 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:05:33,994 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "0e127163-28f0-47d0-bb3d-c04dba33c833"}, "changed": false} >2018-06-22 09:05:34,012 p=21516 u=mistral | TASK [Render deployment file for ComputeHostsDeployment] *********************** >2018-06-22 09:05:34,608 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "a107fff6a6358d616f2c52ef6d72b4cd6c18dc93", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostsDeployment-0e127163-28f0-47d0-bb3d-c04dba33c833", "gid": 0, "group": "root", "md5sum": "5ec166e072b087088cd4ccebfef196ac", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4079, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672734.06-39916787314538/source", "state": "file", "uid": 0} >2018-06-22 09:05:34,628 p=21516 u=mistral | TASK [Check if deployed file exists for ComputeHostsDeployment] **************** >2018-06-22 09:05:34,951 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:34,971 p=21516 u=mistral | TASK [Check previous deployment rc for ComputeHostsDeployment] ***************** >2018-06-22 09:05:34,990 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:35,010 p=21516 u=mistral | TASK [Remove deployed file for ComputeHostsDeployment when previous deployment failed] *** >2018-06-22 09:05:35,025 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:35,042 p=21516 u=mistral | TASK [Force remove deployed file for ComputeHostsDeployment] ******************* >2018-06-22 09:05:35,056 p=21516 u=mistral | skipping: 
[compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:35,074 p=21516 u=mistral | TASK [Run deployment ComputeHostsDeployment] *********************************** >2018-06-22 09:05:35,896 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.notify.json)", "delta": "0:00:00.472743", "end": "2018-06-22 09:05:35.880093", "rc": 0, "start": "2018-06-22 09:05:35.407350", "stderr": "[2018-06-22 09:05:35,431] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json\n[2018-06-22 09:05:35,467] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 
compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain 
compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-22 09:05:35,467] (heat-config) [DEBUG] [2018-06-22 09:05:35,451] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-22 09:05:35,452] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-64c5vxqf332r-0-qa6lkmhpyfxq/2d69a75c-910f-4816-89e8-e10149463aa7\n[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:05:35,452] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833\n[2018-06-22 09:05:35,464] (heat-config) [INFO] \n[2018-06-22 09:05:35,464] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 
overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-22 09:05:35,464] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833\n\n[2018-06-22 09:05:35,467] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:05:35,468] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.notify.json\n[2018-06-22 09:05:35,873] (heat-config) [INFO] \n[2018-06-22 09:05:35,873] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:35,431] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json", "[2018-06-22 09:05:35,467] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 
compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-22 09:05:35,467] (heat-config) [DEBUG] [2018-06-22 09:05:35,451] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-22 09:05:35,452] (heat-config) [INFO] 
deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-64c5vxqf332r-0-qa6lkmhpyfxq/2d69a75c-910f-4816-89e8-e10149463aa7", "[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:05:35,452] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833", "[2018-06-22 09:05:35,464] (heat-config) [INFO] ", "[2018-06-22 09:05:35,464] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 
ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ 
'[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 
overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-22 09:05:35,464] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833", "", "[2018-06-22 09:05:35,467] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:05:35,468] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.notify.json", "[2018-06-22 09:05:35,873] (heat-config) [INFO] ", "[2018-06-22 09:05:35,873] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:05:35,923 p=21516 u=mistral | TASK [Output for ComputeHostsDeployment] *************************************** >2018-06-22 09:05:36,015 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:35,431] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json", > "[2018-06-22 09:05:35,467] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain 
controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-22 09:05:35,467] (heat-config) [DEBUG] [2018-06-22 09:05:35,451] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-22 09:05:35,452] 
(heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-64c5vxqf332r-0-qa6lkmhpyfxq/2d69a75c-910f-4816-89e8-e10149463aa7", > "[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:05:35,452] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:05:35,452] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833", > "[2018-06-22 09:05:35,464] (heat-config) [INFO] ", > "[2018-06-22 09:05:35,464] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > 
"192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 
compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > 
"172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-22 09:05:35,464] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/0e127163-28f0-47d0-bb3d-c04dba33c833", > "", > "[2018-06-22 09:05:35,467] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:05:35,468] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.json < /var/lib/heat-config/deployed/0e127163-28f0-47d0-bb3d-c04dba33c833.notify.json", > "[2018-06-22 09:05:35,873] (heat-config) [INFO] ", > "[2018-06-22 09:05:35,873] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:36,043 p=21516 u=mistral | TASK [Check-mode for Run deployment ComputeHostsDeployment] ******************** >2018-06-22 09:05:36,057 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:36,075 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 
09:05:36,210 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "137d1442-18f5-4a30-aff5-8841fe26771b"}, "changed": false} >2018-06-22 09:05:36,230 p=21516 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment] ******************** >2018-06-22 09:05:36,943 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "67ec1da5124596be4df90bd2fbb1cded1691f341", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesDeployment-137d1442-18f5-4a30-aff5-8841fe26771b", "gid": 0, "group": "root", "md5sum": "6d57ce132944da4d6afe3a16d9de42d7", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19022, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672736.37-190950519839116/source", "state": "file", "uid": 0} >2018-06-22 09:05:36,962 p=21516 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesDeployment] ************* >2018-06-22 09:05:37,279 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:37,300 p=21516 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesDeployment] ************** >2018-06-22 09:05:37,316 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:37,334 p=21516 u=mistral | TASK [Remove deployed file for ComputeAllNodesDeployment when previous deployment failed] *** >2018-06-22 09:05:37,348 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:37,365 p=21516 u=mistral | TASK [Force remove deployed file for ComputeAllNodesDeployment] **************** >2018-06-22 09:05:37,379 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:37,397 p=21516 u=mistral | TASK [Run deployment ComputeAllNodesDeployment] ******************************** >2018-06-22 
09:05:38,274 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.notify.json)", "delta": "0:00:00.547426", "end": "2018-06-22 09:05:38.283440", "rc": 0, "start": "2018-06-22 09:05:37.736014", "stderr": "[2018-06-22 09:05:37,761] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.json\n[2018-06-22 09:05:37,879] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:05:37,879] (heat-config) [DEBUG] \n[2018-06-22 09:05:37,879] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 09:05:37,879] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.json < /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.notify.json\n[2018-06-22 09:05:38,276] (heat-config) [INFO] \n[2018-06-22 09:05:38,276] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:37,761] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.json", "[2018-06-22 09:05:37,879] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:05:37,879] (heat-config) [DEBUG] ", "[2018-06-22 09:05:37,879] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 09:05:37,879] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.json < /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.notify.json", "[2018-06-22 09:05:38,276] (heat-config) [INFO] ", "[2018-06-22 09:05:38,276] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 
09:05:38,292 p=21516 u=mistral | TASK [Output for ComputeAllNodesDeployment] ************************************ >2018-06-22 09:05:38,335 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:37,761] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.json", > "[2018-06-22 09:05:37,879] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:05:37,879] (heat-config) [DEBUG] ", > "[2018-06-22 09:05:37,879] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 09:05:37,879] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.json < /var/lib/heat-config/deployed/137d1442-18f5-4a30-aff5-8841fe26771b.notify.json", > "[2018-06-22 09:05:38,276] (heat-config) [INFO] ", > "[2018-06-22 09:05:38,276] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:38,353 p=21516 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesDeployment] ***************** >2018-06-22 09:05:38,366 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:38,383 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:05:38,436 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "f2540af8-b807-43d6-8d1d-79e70a51b657"}, "changed": false} >2018-06-22 09:05:38,455 p=21516 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment] ********** >2018-06-22 09:05:39,075 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f27e5da7a63770b4c2d9d9e30098384ec28a9a3e", "dest": 
"/var/lib/heat-config/tripleo-config-download/ComputeAllNodesValidationDeployment-f2540af8-b807-43d6-8d1d-79e70a51b657", "gid": 0, "group": "root", "md5sum": "5ffca5901239dc0d3fbdcf735f86ecda", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4934, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672738.51-96623162402541/source", "state": "file", "uid": 0} >2018-06-22 09:05:39,094 p=21516 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesValidationDeployment] *** >2018-06-22 09:05:39,416 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:39,436 p=21516 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesValidationDeployment] **** >2018-06-22 09:05:39,453 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:39,471 p=21516 u=mistral | TASK [Remove deployed file for ComputeAllNodesValidationDeployment when previous deployment failed] *** >2018-06-22 09:05:39,487 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:39,505 p=21516 u=mistral | TASK [Force remove deployed file for ComputeAllNodesValidationDeployment] ****** >2018-06-22 09:05:39,521 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:39,538 p=21516 u=mistral | TASK [Run deployment ComputeAllNodesValidationDeployment] ********************** >2018-06-22 09:05:40,810 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.notify.json)", "delta": "0:00:00.940721", "end": "2018-06-22 09:05:40.819034", "rc": 0, "start": "2018-06-22 09:05:39.878313", "stderr": "[2018-06-22 09:05:39,902] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json\n[2018-06-22 09:05:40,409] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:05:40,409] (heat-config) [DEBUG] [2018-06-22 09:05:39,921] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8\n[2018-06-22 09:05:39,921] (heat-config) [INFO] validate_fqdn=False\n[2018-06-22 09:05:39,921] (heat-config) [INFO] validate_ntp=True\n[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-ckei37fomwyn-0-yfwz3feik3kf/1585afad-b876-4786-a86a-85246cedf14e\n[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:05:39,921] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657\n[2018-06-22 09:05:40,405] (heat-config) [INFO] Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\nPing to 172.17.1.16 succeeded.\nSUCCESS\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\nPing to 172.17.2.15 
succeeded.\nSUCCESS\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\nPing to 172.17.3.18 succeeded.\nSUCCESS\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\nPing to 192.168.24.8 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nSUCCESS\n\n[2018-06-22 09:05:40,405] (heat-config) [DEBUG] \n[2018-06-22 09:05:40,405] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657\n\n[2018-06-22 09:05:40,409] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:05:40,409] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.notify.json\n[2018-06-22 09:05:40,812] (heat-config) [INFO] \n[2018-06-22 09:05:40,812] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:39,902] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json", "[2018-06-22 09:05:40,409] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:05:40,409] (heat-config) [DEBUG] [2018-06-22 09:05:39,921] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", "[2018-06-22 09:05:39,921] (heat-config) [INFO] 
validate_fqdn=False", "[2018-06-22 09:05:39,921] (heat-config) [INFO] validate_ntp=True", "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-ckei37fomwyn-0-yfwz3feik3kf/1585afad-b876-4786-a86a-85246cedf14e", "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:05:39,921] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657", "[2018-06-22 09:05:40,405] (heat-config) [INFO] Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", "Ping to 172.17.1.16 succeeded.", "SUCCESS", "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", "Ping to 172.17.2.15 succeeded.", "SUCCESS", "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", "Ping to 172.17.3.18 succeeded.", "SUCCESS", "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", "Ping to 192.168.24.8 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "SUCCESS", "", "[2018-06-22 09:05:40,405] (heat-config) [DEBUG] ", "[2018-06-22 09:05:40,405] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657", "", "[2018-06-22 09:05:40,409] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:05:40,409] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.notify.json", "[2018-06-22 09:05:40,812] (heat-config) [INFO] ", "[2018-06-22 09:05:40,812] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} 
>2018-06-22 09:05:40,829 p=21516 u=mistral | TASK [Output for ComputeAllNodesValidationDeployment] ************************** >2018-06-22 09:05:40,875 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:39,902] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json", > "[2018-06-22 09:05:40,409] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.15 for local network 172.17.2.0/24.\\nPing to 172.17.2.15 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:05:40,409] (heat-config) [DEBUG] [2018-06-22 09:05:39,921] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", > "[2018-06-22 09:05:39,921] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-22 09:05:39,921] (heat-config) [INFO] validate_ntp=True", > "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-ckei37fomwyn-0-yfwz3feik3kf/1585afad-b876-4786-a86a-85246cedf14e", > "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:05:39,921] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:05:39,921] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657", > "[2018-06-22 09:05:40,405] (heat-config) [INFO] Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", > "Ping to 172.17.1.16 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.15 for local network 172.17.2.0/24.", > "Ping to 172.17.2.15 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", > "Ping to 172.17.3.18 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", > "Ping to 192.168.24.8 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "SUCCESS", > "", > "[2018-06-22 09:05:40,405] (heat-config) [DEBUG] ", > "[2018-06-22 09:05:40,405] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/f2540af8-b807-43d6-8d1d-79e70a51b657", > "", > "[2018-06-22 09:05:40,409] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:05:40,409] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.json < /var/lib/heat-config/deployed/f2540af8-b807-43d6-8d1d-79e70a51b657.notify.json", > "[2018-06-22 09:05:40,812] (heat-config) [INFO] ", > "[2018-06-22 09:05:40,812] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:40,892 p=21516 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesValidationDeployment] ******* >2018-06-22 09:05:40,905 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:40,922 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:05:40,996 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "a65e71e4-ed81-41a4-8893-3eac5cffc60b"}, "changed": false} >2018-06-22 09:05:41,014 p=21516 u=mistral | TASK [Render deployment file for 
ComputeHostPrepDeployment] ******************** >2018-06-22 09:05:41,678 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "9b8f298ec7fcc76910ae1c371282e1b4fb7e6fb8", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostPrepDeployment-a65e71e4-ed81-41a4-8893-3eac5cffc60b", "gid": 0, "group": "root", "md5sum": "e3b0f4b160ba4fc2257d7ca9f20c237f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 33672, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672741.09-222499356363594/source", "state": "file", "uid": 0} >2018-06-22 09:05:41,701 p=21516 u=mistral | TASK [Check if deployed file exists for ComputeHostPrepDeployment] ************* >2018-06-22 09:05:42,052 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:42,073 p=21516 u=mistral | TASK [Check previous deployment rc for ComputeHostPrepDeployment] ************** >2018-06-22 09:05:42,095 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:42,116 p=21516 u=mistral | TASK [Remove deployed file for ComputeHostPrepDeployment when previous deployment failed] *** >2018-06-22 09:05:42,133 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:42,152 p=21516 u=mistral | TASK [Force remove deployed file for ComputeHostPrepDeployment] **************** >2018-06-22 09:05:42,168 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:42,189 p=21516 u=mistral | TASK [Run deployment ComputeHostPrepDeployment] ******************************** >2018-06-22 09:05:52,682 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code 
/var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.notify.json)", "delta": "0:00:10.143088", "end": "2018-06-22 09:05:52.683275", "rc": 0, "start": "2018-06-22 09:05:42.540187", "stderr": "[2018-06-22 09:05:42,567] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json\n[2018-06-22 09:05:52,282] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}\n[2018-06-22 09:05:52,283] (heat-config) [DEBUG] [2018-06-22 09:05:42,591] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_variables.json\n[2018-06-22 09:05:52,278] (heat-config) [INFO] Return code 0\n[2018-06-22 09:05:52,278] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [ceilometer logs readme] **************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}\n...ignoring\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost] => (item=/var/log/containers/neutron)\n\nTASK [neutron logs readme] *****************************************************\nfatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}\n...ignoring\n\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\nok: [localhost]\n\nTASK [Stop and disable iscsid.socket service] **********************************\nchanged: [localhost]\n\nTASK [create persistent logs directory] ****************************************\nchanged: [localhost]\n\nTASK [nova logs readme] ********************************************************\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}\n...ignoring\n\nTASK [Mount Nova NFS Share] ****************************************************\nskipping: [localhost]\n\nTASK [create persistent directories] *******************************************\nchanged: [localhost] => (item=/var/lib/nova)\nok: [localhost] => (item=/var/lib/libvirt)\n\nTASK [ensure ceph configurations exist] ****************************************\nchanged: [localhost]\n\nTASK [is Instance HA enabled] **************************************************\nok: [localhost]\n\nTASK [prepare Instance HA script directory] ************************************\nskipping: [localhost]\n\nTASK [install Instance HA script that runs nova-compute] ***********************\nskipping: [localhost]\n\nTASK [Get list of instance HA compute nodes] ***********************************\nskipping: [localhost]\n\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\nskipping: [localhost]\n\nTASK [create libvirt persistent data directories] ******************************\nok: [localhost] => (item=/etc/libvirt)\nok: [localhost] => (item=/etc/libvirt/secrets)\nok: [localhost] => (item=/etc/libvirt/qemu)\nok: [localhost] => (item=/var/lib/libvirt)\nchanged: [localhost] => (item=/var/log/containers/libvirt)\n\nTASK [ensure qemu group is present on the host] ********************************\nok: [localhost]\n\nTASK [ensure qemu user is present on the host] *********************************\nok: [localhost]\n\nTASK [create directory for vhost-user sockets with qemu ownership] *************\nchanged: [localhost]\n\nTASK [check if libvirt is installed] *******************************************\nchanged: [localhost]\n\nTASK [make sure libvirt services are disabled] *********************************\nchanged: [localhost] => (item=libvirtd.service)\nchanged: [localhost] => 
(item=virtlogd.socket)\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \n\n\n[2018-06-22 09:05:52,278] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\ncan add warn=False to this command task or set command_warnings=False in\nansible.cfg to get rid of this message.\n\n[2018-06-22 09:05:52,278] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml\n\n[2018-06-22 09:05:52,283] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-22 09:05:52,283] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.notify.json\n[2018-06-22 09:05:52,677] (heat-config) [INFO] \n[2018-06-22 09:05:52,677] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:42,567] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json", "[2018-06-22 09:05:52,282] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", "[2018-06-22 09:05:52,283] (heat-config) [DEBUG] [2018-06-22 09:05:42,591] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_variables.json", "[2018-06-22 09:05:52,278] (heat-config) [INFO] Return code 0", "[2018-06-22 09:05:52,278] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [ceilometer logs readme] **************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", "...ignoring", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost] => (item=/var/log/containers/neutron)", "", "TASK [neutron logs readme] *****************************************************", "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", "...ignoring", "", "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", "ok: [localhost]", "", "TASK [Stop and disable iscsid.socket service] **********************************", "changed: [localhost]", "", "TASK [create persistent logs directory] ****************************************", "changed: [localhost]", "", "TASK [nova logs readme] ********************************************************", "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", "...ignoring", "", "TASK [Mount Nova NFS Share] ****************************************************", "skipping: [localhost]", "", "TASK [create persistent directories] *******************************************", "changed: [localhost] => (item=/var/lib/nova)", "ok: [localhost] => (item=/var/lib/libvirt)", "", "TASK [ensure ceph configurations exist] ****************************************", "changed: [localhost]", "", "TASK [is Instance HA enabled] **************************************************", "ok: [localhost]", "", "TASK [prepare Instance HA script directory] ************************************", "skipping: [localhost]", "", "TASK [install Instance HA script that runs nova-compute] ***********************", "skipping: [localhost]", "", "TASK [Get list of instance HA compute nodes] ***********************************", "skipping: [localhost]", "", "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", "skipping: [localhost]", "", "TASK [create libvirt persistent data directories] ******************************", "ok: [localhost] => (item=/etc/libvirt)", "ok: [localhost] => (item=/etc/libvirt/secrets)", "ok: [localhost] => (item=/etc/libvirt/qemu)", "ok: [localhost] => (item=/var/lib/libvirt)", "changed: [localhost] => (item=/var/log/containers/libvirt)", "", "TASK [ensure qemu group is present on the host] ********************************", "ok: [localhost]", "", "TASK [ensure qemu user is present on the host] *********************************", "ok: [localhost]", "", "TASK [create directory for vhost-user sockets with qemu ownership] *************", "changed: [localhost]", "", "TASK [check if libvirt is installed] *******************************************", "changed: [localhost]", "", "TASK [make sure libvirt services are disabled] 
*********************************", "changed: [localhost] => (item=libvirtd.service)", "changed: [localhost] => (item=virtlogd.socket)", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=20 changed=12 unreachable=0 failed=0 ", "", "", "[2018-06-22 09:05:52,278] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", "rpm. If you need to use command because yum, dnf or zypper is insufficient you", "can add warn=False to this command task or set command_warnings=False in", "ansible.cfg to get rid of this message.", "", "[2018-06-22 09:05:52,278] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml", "", "[2018-06-22 09:05:52,283] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-22 09:05:52,283] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.notify.json", "[2018-06-22 09:05:52,677] (heat-config) [INFO] ", "[2018-06-22 09:05:52,677] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:05:52,700 p=21516 u=mistral | TASK [Output for ComputeHostPrepDeployment] ************************************ >2018-06-22 09:05:52,748 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:42,567] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json", > "[2018-06-22 09:05:52,282] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] 
***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [ceilometer logs readme] **************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\\\", \\\"msg\\\": \\\"Destination directory /var/log/ceilometer does not exist\\\"}\\n...ignoring\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost] => (item=/var/log/containers/neutron)\\n\\nTASK [neutron logs readme] *****************************************************\\nfatal: [localhost]: FAILED! => {\\\"changed\\\": false, \\\"checksum\\\": \\\"f5a95f434a4aad25a9a81a045dec39159a6e8864\\\", \\\"msg\\\": \\\"Destination directory /var/log/neutron does not exist\\\"}\\n...ignoring\\n\\nTASK [stat /lib/systemd/system/iscsid.socket] **********************************\\nok: [localhost]\\n\\nTASK [Stop and disable iscsid.socket service] **********************************\\nchanged: [localhost]\\n\\nTASK [create persistent logs directory] ****************************************\\nchanged: [localhost]\\n\\nTASK [nova logs readme] ********************************************************\\nfatal: [localhost]: FAILED! 
=> {\\\"changed\\\": false, \\\"checksum\\\": \\\"c2216cc4edf5d3ce90f10748c3243db4e1842a85\\\", \\\"msg\\\": \\\"Destination directory /var/log/nova does not exist\\\"}\\n...ignoring\\n\\nTASK [Mount Nova NFS Share] ****************************************************\\nskipping: [localhost]\\n\\nTASK [create persistent directories] *******************************************\\nchanged: [localhost] => (item=/var/lib/nova)\\nok: [localhost] => (item=/var/lib/libvirt)\\n\\nTASK [ensure ceph configurations exist] ****************************************\\nchanged: [localhost]\\n\\nTASK [is Instance HA enabled] **************************************************\\nok: [localhost]\\n\\nTASK [prepare Instance HA script directory] ************************************\\nskipping: [localhost]\\n\\nTASK [install Instance HA script that runs nova-compute] ***********************\\nskipping: [localhost]\\n\\nTASK [Get list of instance HA compute nodes] ***********************************\\nskipping: [localhost]\\n\\nTASK [If instance HA is enabled on the node activate the evacuation completed check] ***\\nskipping: [localhost]\\n\\nTASK [create libvirt persistent data directories] ******************************\\nok: [localhost] => (item=/etc/libvirt)\\nok: [localhost] => (item=/etc/libvirt/secrets)\\nok: [localhost] => (item=/etc/libvirt/qemu)\\nok: [localhost] => (item=/var/lib/libvirt)\\nchanged: [localhost] => (item=/var/log/containers/libvirt)\\n\\nTASK [ensure qemu group is present on the host] ********************************\\nok: [localhost]\\n\\nTASK [ensure qemu user is present on the host] *********************************\\nok: [localhost]\\n\\nTASK [create directory for vhost-user sockets with qemu ownership] *************\\nchanged: [localhost]\\n\\nTASK [check if libvirt is installed] *******************************************\\nchanged: [localhost]\\n\\nTASK [make sure libvirt services are disabled] *********************************\\nchanged: [localhost] => 
(item=libvirtd.service)\\nchanged: [localhost] => (item=virtlogd.socket)\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=20 changed=12 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \" [WARNING]: Consider using the yum, dnf or zypper module rather than running\\nrpm. If you need to use command because yum, dnf or zypper is insufficient you\\ncan add warn=False to this command task or set command_warnings=False in\\nansible.cfg to get rid of this message.\\n\", \"deploy_status_code\": 0}", > "[2018-06-22 09:05:52,283] (heat-config) [DEBUG] [2018-06-22 09:05:42,591] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_variables.json", > "[2018-06-22 09:05:52,278] (heat-config) [INFO] Return code 0", > "[2018-06-22 09:05:52,278] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [ceilometer logs readme] **************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3\", \"msg\": \"Destination directory /var/log/ceilometer does not exist\"}", > "...ignoring", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost] => (item=/var/log/containers/neutron)", > "", > "TASK [neutron logs readme] *****************************************************", > "fatal: [localhost]: FAILED! => {\"changed\": false, \"checksum\": \"f5a95f434a4aad25a9a81a045dec39159a6e8864\", \"msg\": \"Destination directory /var/log/neutron does not exist\"}", > "...ignoring", > "", > "TASK [stat /lib/systemd/system/iscsid.socket] **********************************", > "ok: [localhost]", > "", > "TASK [Stop and disable iscsid.socket service] **********************************", > "changed: [localhost]", > "", > "TASK [create persistent logs directory] ****************************************", > "changed: [localhost]", > "", > "TASK [nova logs readme] ********************************************************", > "fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"checksum\": \"c2216cc4edf5d3ce90f10748c3243db4e1842a85\", \"msg\": \"Destination directory /var/log/nova does not exist\"}", > "...ignoring", > "", > "TASK [Mount Nova NFS Share] ****************************************************", > "skipping: [localhost]", > "", > "TASK [create persistent directories] *******************************************", > "changed: [localhost] => (item=/var/lib/nova)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "", > "TASK [ensure ceph configurations exist] ****************************************", > "changed: [localhost]", > "", > "TASK [is Instance HA enabled] **************************************************", > "ok: [localhost]", > "", > "TASK [prepare Instance HA script directory] ************************************", > "skipping: [localhost]", > "", > "TASK [install Instance HA script that runs nova-compute] ***********************", > "skipping: [localhost]", > "", > "TASK [Get list of instance HA compute nodes] ***********************************", > "skipping: [localhost]", > "", > "TASK [If instance HA is enabled on the node activate the evacuation completed check] ***", > "skipping: [localhost]", > "", > "TASK [create libvirt persistent data directories] ******************************", > "ok: [localhost] => (item=/etc/libvirt)", > "ok: [localhost] => (item=/etc/libvirt/secrets)", > "ok: [localhost] => (item=/etc/libvirt/qemu)", > "ok: [localhost] => (item=/var/lib/libvirt)", > "changed: [localhost] => (item=/var/log/containers/libvirt)", > "", > "TASK [ensure qemu group is present on the host] ********************************", > "ok: [localhost]", > "", > "TASK [ensure qemu user is present on the host] *********************************", > "ok: [localhost]", > "", > "TASK [create directory for vhost-user sockets with qemu ownership] *************", > "changed: [localhost]", > "", > "TASK [check if libvirt is installed] *******************************************", > "changed: 
[localhost]", > "", > "TASK [make sure libvirt services are disabled] *********************************", > "changed: [localhost] => (item=libvirtd.service)", > "changed: [localhost] => (item=virtlogd.socket)", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=20 changed=12 unreachable=0 failed=0 ", > "", > "", > "[2018-06-22 09:05:52,278] (heat-config) [INFO] [WARNING]: Consider using the yum, dnf or zypper module rather than running", > "rpm. If you need to use command because yum, dnf or zypper is insufficient you", > "can add warn=False to this command task or set command_warnings=False in", > "ansible.cfg to get rid of this message.", > "", > "[2018-06-22 09:05:52,278] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/a65e71e4-ed81-41a4-8893-3eac5cffc60b_playbook.yaml", > "", > "[2018-06-22 09:05:52,283] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-22 09:05:52,283] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.json < /var/lib/heat-config/deployed/a65e71e4-ed81-41a4-8893-3eac5cffc60b.notify.json", > "[2018-06-22 09:05:52,677] (heat-config) [INFO] ", > "[2018-06-22 09:05:52,677] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:52,767 p=21516 u=mistral | TASK [Check-mode for Run deployment ComputeHostPrepDeployment] ***************** >2018-06-22 09:05:52,781 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:52,797 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 
09:05:52,846 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "d5e3aec3-d014-48ff-82ff-fd73b9664e9f"}, "changed": false} >2018-06-22 09:05:52,864 p=21516 u=mistral | TASK [Render deployment file for ComputeArtifactsDeploy] *********************** >2018-06-22 09:05:53,466 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "5d38c68b0f75dce30ee514bc20f687e8722a78ed", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeArtifactsDeploy-d5e3aec3-d014-48ff-82ff-fd73b9664e9f", "gid": 0, "group": "root", "md5sum": "97052ac15c16557a8a42e062b22280d0", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2015, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672752.91-257281006733238/source", "state": "file", "uid": 0} >2018-06-22 09:05:53,484 p=21516 u=mistral | TASK [Check if deployed file exists for ComputeArtifactsDeploy] **************** >2018-06-22 09:05:53,807 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:53,826 p=21516 u=mistral | TASK [Check previous deployment rc for ComputeArtifactsDeploy] ***************** >2018-06-22 09:05:53,842 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:53,860 p=21516 u=mistral | TASK [Remove deployed file for ComputeArtifactsDeploy when previous deployment failed] *** >2018-06-22 09:05:53,875 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:53,893 p=21516 u=mistral | TASK [Force remove deployed file for ComputeArtifactsDeploy] ******************* >2018-06-22 09:05:53,908 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:53,926 p=21516 u=mistral | TASK [Run deployment ComputeArtifactsDeploy] *********************************** >2018-06-22 
09:05:54,706 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.notify.json)", "delta": "0:00:00.444976", "end": "2018-06-22 09:05:54.711195", "rc": 0, "start": "2018-06-22 09:05:54.266219", "stderr": "[2018-06-22 09:05:54,290] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json\n[2018-06-22 09:05:54,319] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:05:54,319] (heat-config) [DEBUG] [2018-06-22 09:05:54,310] (heat-config) [INFO] artifact_urls=\n[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019\n[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ComputeArtifactsDeploy-bvgvx6drqjy3-0-7nr5oh24jef2/9e2ce371-b47b-4776-b3c2-0ca2a62385c7\n[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:05:54,311] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f\n[2018-06-22 09:05:54,316] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-06-22 09:05:54,316] (heat-config) [DEBUG] \n[2018-06-22 09:05:54,316] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f\n\n[2018-06-22 09:05:54,319] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:05:54,320] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.notify.json\n[2018-06-22 09:05:54,705] (heat-config) [INFO] \n[2018-06-22 09:05:54,705] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:54,290] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json", "[2018-06-22 09:05:54,319] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:05:54,319] (heat-config) [DEBUG] [2018-06-22 09:05:54,310] (heat-config) [INFO] artifact_urls=", "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ComputeArtifactsDeploy-bvgvx6drqjy3-0-7nr5oh24jef2/9e2ce371-b47b-4776-b3c2-0ca2a62385c7", "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:05:54,311] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f", "[2018-06-22 09:05:54,316] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-06-22 09:05:54,316] (heat-config) [DEBUG] ", "[2018-06-22 09:05:54,316] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f", "", "[2018-06-22 09:05:54,319] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:05:54,320] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.notify.json", "[2018-06-22 09:05:54,705] (heat-config) [INFO] ", "[2018-06-22 09:05:54,705] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:05:54,726 p=21516 u=mistral | TASK [Output for ComputeArtifactsDeploy] *************************************** >2018-06-22 09:05:54,773 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:54,290] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json", > "[2018-06-22 09:05:54,319] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:05:54,319] (heat-config) [DEBUG] [2018-06-22 09:05:54,310] (heat-config) [INFO] artifact_urls=", > "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_server_id=5592bd3b-3706-4a5e-bb8e-c90f12b8f019", > "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-ComputeArtifactsDeploy-bvgvx6drqjy3-0-7nr5oh24jef2/9e2ce371-b47b-4776-b3c2-0ca2a62385c7", > "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:05:54,311] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:05:54,311] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f", > "[2018-06-22 09:05:54,316] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-06-22 09:05:54,316] (heat-config) [DEBUG] ", > "[2018-06-22 09:05:54,316] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d5e3aec3-d014-48ff-82ff-fd73b9664e9f", > "", > "[2018-06-22 09:05:54,319] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:05:54,320] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.json < /var/lib/heat-config/deployed/d5e3aec3-d014-48ff-82ff-fd73b9664e9f.notify.json", > "[2018-06-22 09:05:54,705] (heat-config) [INFO] ", > "[2018-06-22 09:05:54,705] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:05:54,793 p=21516 u=mistral | TASK [Check-mode for Run deployment ComputeArtifactsDeploy] ******************** >2018-06-22 09:05:54,806 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:54,828 p=21516 u=mistral | TASK [include] 
***************************************************************** >2018-06-22 09:05:54,906 p=21516 u=mistral | TASK [include] ***************************************************************** >2018-06-22 09:05:54,995 p=21516 u=mistral | TASK [include] ***************************************************************** >2018-06-22 09:05:55,206 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/CephStorage/deployments.yaml for ceph-0 >2018-06-22 09:05:55,214 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/CephStorage/deployments.yaml for ceph-0 >2018-06-22 09:05:55,221 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/CephStorage/deployments.yaml for ceph-0 >2018-06-22 09:05:55,229 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/CephStorage/deployments.yaml for ceph-0 >2018-06-22 09:05:55,236 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/CephStorage/deployments.yaml for ceph-0 >2018-06-22 09:05:55,244 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/CephStorage/deployments.yaml for ceph-0 >2018-06-22 09:05:55,251 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/CephStorage/deployments.yaml for ceph-0 >2018-06-22 09:05:55,258 p=21516 u=mistral | included: /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/CephStorage/deployments.yaml for ceph-0 >2018-06-22 09:05:55,323 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:05:55,381 p=21516 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "3f5d31fd-1ed4-43e5-9d1a-3866348fbafa"}, "changed": false} >2018-06-22 09:05:55,400 p=21516 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-06-22 09:05:55,964 p=21516 u=mistral | changed: [ceph-0] => {"changed": 
true, "checksum": "4bed262f05bf9fd9720074015cf870f2b376bdc5", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", "gid": 0, "group": "root", "md5sum": "7249706de9bca0a2769e4983649b08a4", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8777, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672755.46-272141681898444/source", "state": "file", "uid": 0} >2018-06-22 09:05:55,985 p=21516 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-06-22 09:05:56,281 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:05:56,301 p=21516 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-06-22 09:05:56,320 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:56,337 p=21516 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-06-22 09:05:56,355 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:56,372 p=21516 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-06-22 09:05:56,389 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:05:56,406 p=21516 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-06-22 09:06:11,542 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.notify.json)", "delta": "0:00:14.822303", "end": "2018-06-22 09:06:11.544399", "rc": 0, "start": "2018-06-22 09:05:56.722096", "stderr": "[2018-06-22 
09:05:56,745] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json\n[2018-06-22 09:06:11,138] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 09:05:57 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 09:05:57 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 09:05:57 AM] [INFO] Not using any mapping file.\\n[2018/06/22 09:05:57 AM] [INFO] Finding active nics\\n[2018/06/22 09:05:57 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 09:05:57 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 09:05:57 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 09:05:57 AM] [INFO] lo is not an active nic\\n[2018/06/22 09:05:57 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 09:05:57 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 09:05:57 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 09:05:57 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 09:05:57 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 09:05:57 AM] [INFO] adding interface: eth0\\n[2018/06/22 09:05:57 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 09:05:57 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] adding interface: eth1\\n[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 09:05:57 AM] [INFO] applying network configs...\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 09:05:57 AM] [INFO] running 
ifdown on interface: eth0\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 09:05:57 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 09:05:58 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 09:06:02 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:06:06 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 
09:06:10 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-06-22 09:06:11,138] (heat-config) [DEBUG] [2018-06-22 09:05:56,764] (heat-config) [INFO] interface_name=nic1\n[2018-06-22 09:05:56,764] (heat-config) [INFO] bridge_name=br-ex\n[2018-06-22 09:05:56,764] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-22 09:05:56,764] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:05:56,764] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-NetworkDeployment-kdxxldjgvahy-TripleOSoftwareDeployment-owarhab7awno/526eb1d9-c967-46a7-9d09-85871ebc086e\n[2018-06-22 09:05:56,765] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:05:56,765] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:05:56,765] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa\n[2018-06-22 09:06:11,134] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS\n\n[2018-06-22 09:06:11,135] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/06/22 09:05:57 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/06/22 09:05:57 AM] [INFO] Ifcfg net config provider created.\n[2018/06/22 09:05:57 AM] [INFO] Not using any mapping file.\n[2018/06/22 09:05:57 AM] [INFO] Finding active nics\n[2018/06/22 09:05:57 AM] [INFO] eth1 is an embedded active nic\n[2018/06/22 09:05:57 AM] [INFO] eth0 is an embedded active nic\n[2018/06/22 09:05:57 AM] [INFO] eth2 is an embedded active nic\n[2018/06/22 09:05:57 AM] [INFO] lo is not an active nic\n[2018/06/22 09:05:57 AM] [INFO] No DPDK mapping available in path 
(/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/06/22 09:05:57 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/06/22 09:05:57 AM] [INFO] nic3 mapped to: eth2\n[2018/06/22 09:05:57 AM] [INFO] nic2 mapped to: eth1\n[2018/06/22 09:05:57 AM] [INFO] nic1 mapped to: eth0\n[2018/06/22 09:05:57 AM] [INFO] adding interface: eth0\n[2018/06/22 09:05:57 AM] [INFO] adding custom route for interface: eth0\n[2018/06/22 09:05:57 AM] [INFO] adding bridge: br-isolated\n[2018/06/22 09:05:57 AM] [INFO] adding interface: eth1\n[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan30\n[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan40\n[2018/06/22 09:05:57 AM] [INFO] applying network configs...\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth1\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth0\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/06/22 09:05:57 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth1\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/06/22 09:05:57 AM] [INFO] running ifup on bridge: br-isolated\n[2018/06/22 09:05:57 AM] [INFO] running ifup on interface: eth1\n[2018/06/22 09:05:58 AM] [INFO] running ifup on interface: eth0\n[2018/06/22 09:06:02 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 09:06:06 AM] [INFO] running ifup on interface: vlan40\n[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan30\n[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan40\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config 
--key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.3\n++ '[' -n 192.168.24.3 ']'\n++ break\n++ echo 192.168.24.3\n+ local METADATA_IP=192.168.24.3\n+ '[' -n 192.168.24.3 ']'\n+ is_local_ip 192.168.24.3\n+ local IP_TO_CHECK=192.168.24.3\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.3/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\n+ _ping=ping\n+ [[ 192.168.24.3 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.3\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-06-22 09:06:11,135] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa\n\n[2018-06-22 09:06:11,138] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:06:11,139] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.notify.json\n[2018-06-22 09:06:11,538] (heat-config) [INFO] \n[2018-06-22 09:06:11,538] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:05:56,745] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json", "[2018-06-22 09:06:11,138] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": 
[{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 09:05:57 AM] [INFO] Using config file 
at: /etc/os-net-config/config.json\\n[2018/06/22 09:05:57 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 09:05:57 AM] [INFO] Not using any mapping file.\\n[2018/06/22 09:05:57 AM] [INFO] Finding active nics\\n[2018/06/22 09:05:57 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 09:05:57 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 09:05:57 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 09:05:57 AM] [INFO] lo is not an active nic\\n[2018/06/22 09:05:57 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 09:05:57 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 09:05:57 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 09:05:57 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 09:05:57 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 09:05:57 AM] [INFO] adding interface: eth0\\n[2018/06/22 09:05:57 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 09:05:57 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] adding interface: eth1\\n[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 09:05:57 AM] [INFO] applying network configs...\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth0\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 09:05:57 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 09:05:57 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 09:05:58 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 09:06:02 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:06:06 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed 
-e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-06-22 09:06:11,138] (heat-config) [DEBUG] [2018-06-22 09:05:56,764] (heat-config) [INFO] interface_name=nic1", "[2018-06-22 09:05:56,764] (heat-config) [INFO] bridge_name=br-ex", "[2018-06-22 09:05:56,764] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-22 09:05:56,764] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:05:56,764] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-NetworkDeployment-kdxxldjgvahy-TripleOSoftwareDeployment-owarhab7awno/526eb1d9-c967-46a7-9d09-85871ebc086e", "[2018-06-22 09:05:56,765] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:05:56,765] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:05:56,765] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", "[2018-06-22 09:06:11,134] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", "", "[2018-06-22 09:06:11,135] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, 
{\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/06/22 09:05:57 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/06/22 09:05:57 AM] [INFO] Ifcfg net config provider created.", "[2018/06/22 09:05:57 AM] [INFO] Not using any mapping file.", "[2018/06/22 09:05:57 AM] [INFO] Finding active nics", "[2018/06/22 09:05:57 AM] [INFO] eth1 is an embedded active nic", "[2018/06/22 09:05:57 AM] [INFO] eth0 is an embedded active nic", "[2018/06/22 09:05:57 AM] [INFO] eth2 is an embedded active nic", "[2018/06/22 09:05:57 AM] [INFO] lo is not an active nic", "[2018/06/22 09:05:57 AM] 
[INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/06/22 09:05:57 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/06/22 09:05:57 AM] [INFO] nic3 mapped to: eth2", "[2018/06/22 09:05:57 AM] [INFO] nic2 mapped to: eth1", "[2018/06/22 09:05:57 AM] [INFO] nic1 mapped to: eth0", "[2018/06/22 09:05:57 AM] [INFO] adding interface: eth0", "[2018/06/22 09:05:57 AM] [INFO] adding custom route for interface: eth0", "[2018/06/22 09:05:57 AM] [INFO] adding bridge: br-isolated", "[2018/06/22 09:05:57 AM] [INFO] adding interface: eth1", "[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan30", "[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan40", "[2018/06/22 09:05:57 AM] [INFO] applying network configs...", "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth1", "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth0", "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30", "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40", "[2018/06/22 09:05:57 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/06/22 09:05:57 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/06/22 09:05:57 AM] [INFO] running ifup on bridge: br-isolated", "[2018/06/22 09:05:57 AM] [INFO] running ifup on interface: eth1", "[2018/06/22 09:05:58 AM] [INFO] running ifup on interface: eth0", "[2018/06/22 09:06:02 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 09:06:06 AM] [INFO] running ifup on interface: vlan40", "[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan30", "[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan40", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.3", "++ '[' -n 192.168.24.3 ']'", "++ break", "++ echo 192.168.24.3", "+ local METADATA_IP=192.168.24.3", "+ '[' -n 192.168.24.3 ']'", "+ is_local_ip 192.168.24.3", "+ local IP_TO_CHECK=192.168.24.3", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.3/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", "+ _ping=ping", "+ [[ 192.168.24.3 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.3", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-06-22 09:06:11,135] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", "", "[2018-06-22 09:06:11,138] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:06:11,139] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.notify.json", "[2018-06-22 09:06:11,538] (heat-config) [INFO] ", "[2018-06-22 09:06:11,538] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:06:11,569 p=21516 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-06-22 09:06:11,620 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:05:56,745] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json", > "[2018-06-22 09:06:11,138] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.3...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.10/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"172.17.4.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/06/22 09:05:57 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/06/22 09:05:57 AM] [INFO] Ifcfg net config provider created.\\n[2018/06/22 09:05:57 AM] [INFO] Not using any mapping file.\\n[2018/06/22 09:05:57 AM] [INFO] Finding active nics\\n[2018/06/22 09:05:57 AM] [INFO] eth1 is an embedded active nic\\n[2018/06/22 09:05:57 AM] [INFO] eth0 is an embedded active nic\\n[2018/06/22 09:05:57 AM] [INFO] eth2 is an embedded active nic\\n[2018/06/22 09:05:57 AM] [INFO] lo is not an active nic\\n[2018/06/22 09:05:57 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/06/22 09:05:57 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/06/22 09:05:57 AM] [INFO] nic3 mapped to: eth2\\n[2018/06/22 09:05:57 AM] [INFO] nic2 mapped to: eth1\\n[2018/06/22 09:05:57 AM] [INFO] nic1 mapped to: eth0\\n[2018/06/22 09:05:57 AM] [INFO] adding interface: eth0\\n[2018/06/22 09:05:57 AM] [INFO] adding custom route for interface: eth0\\n[2018/06/22 09:05:57 AM] [INFO] adding bridge: br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] adding interface: eth1\\n[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan30\\n[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan40\\n[2018/06/22 09:05:57 AM] [INFO] applying network configs...\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth1\\n[2018/06/22 09:05:57 AM] [INFO] running 
ifdown on interface: eth0\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40\\n[2018/06/22 09:05:57 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/06/22 09:05:57 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/06/22 09:05:57 AM] [INFO] running ifup on interface: eth1\\n[2018/06/22 09:05:58 AM] [INFO] running ifup on interface: eth0\\n[2018/06/22 09:06:02 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:06:06 AM] [INFO] running ifup on interface: vlan40\\n[2018/06/22 
09:06:10 AM] [INFO] running ifup on interface: vlan30\\n[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.3\\n++ '[' -n 192.168.24.3 ']'\\n++ break\\n++ echo 192.168.24.3\\n+ local METADATA_IP=192.168.24.3\\n+ '[' -n 192.168.24.3 ']'\\n+ is_local_ip 192.168.24.3\\n+ local IP_TO_CHECK=192.168.24.3\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.3/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.3...'\\n+ _ping=ping\\n+ [[ 192.168.24.3 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.3\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-06-22 09:06:11,138] (heat-config) [DEBUG] [2018-06-22 09:05:56,764] (heat-config) [INFO] interface_name=nic1", > "[2018-06-22 09:05:56,764] (heat-config) [INFO] bridge_name=br-ex", > "[2018-06-22 09:05:56,764] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-22 09:05:56,764] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:05:56,764] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-NetworkDeployment-kdxxldjgvahy-TripleOSoftwareDeployment-owarhab7awno/526eb1d9-c967-46a7-9d09-85871ebc086e", > "[2018-06-22 09:05:56,765] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:05:56,765] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:05:56,765] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", > "[2018-06-22 09:06:11,134] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.3...SUCCESS", > "", > "[2018-06-22 09:06:11,135] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", 
\"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/06/22 09:05:57 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/06/22 09:05:57 AM] [INFO] Ifcfg net config provider created.", > "[2018/06/22 09:05:57 AM] [INFO] Not using any mapping file.", > "[2018/06/22 09:05:57 AM] [INFO] Finding active nics", > "[2018/06/22 09:05:57 AM] [INFO] eth1 is an embedded active nic", > "[2018/06/22 09:05:57 AM] [INFO] eth0 is an embedded active nic", > "[2018/06/22 09:05:57 AM] [INFO] eth2 is an embedded active nic", > "[2018/06/22 09:05:57 AM] [INFO] 
lo is not an active nic", > "[2018/06/22 09:05:57 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/06/22 09:05:57 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/06/22 09:05:57 AM] [INFO] nic3 mapped to: eth2", > "[2018/06/22 09:05:57 AM] [INFO] nic2 mapped to: eth1", > "[2018/06/22 09:05:57 AM] [INFO] nic1 mapped to: eth0", > "[2018/06/22 09:05:57 AM] [INFO] adding interface: eth0", > "[2018/06/22 09:05:57 AM] [INFO] adding custom route for interface: eth0", > "[2018/06/22 09:05:57 AM] [INFO] adding bridge: br-isolated", > "[2018/06/22 09:05:57 AM] [INFO] adding interface: eth1", > "[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan30", > "[2018/06/22 09:05:57 AM] [INFO] adding vlan: vlan40", > "[2018/06/22 09:05:57 AM] [INFO] applying network configs...", > "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth1", > "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: eth0", > "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan30", > "[2018/06/22 09:05:57 AM] [INFO] running ifdown on interface: vlan40", > "[2018/06/22 09:05:57 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/06/22 09:05:57 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/06/22 09:05:57 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/06/22 09:05:57 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/06/22 09:05:57 AM] [INFO] running ifup on interface: eth1", > "[2018/06/22 09:05:58 AM] [INFO] running ifup on interface: eth0", > "[2018/06/22 09:06:02 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 09:06:06 AM] [INFO] running ifup on interface: vlan40", > "[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan30", > "[2018/06/22 09:06:10 AM] [INFO] running ifup on interface: vlan40", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key 
os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.3", > "++ '[' -n 192.168.24.3 ']'", > "++ break", > "++ echo 192.168.24.3", > "+ local METADATA_IP=192.168.24.3", > "+ '[' -n 192.168.24.3 ']'", > "+ is_local_ip 192.168.24.3", > "+ local IP_TO_CHECK=192.168.24.3", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.3/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.3...'", > "+ _ping=ping", > "+ [[ 192.168.24.3 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.3", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-06-22 09:06:11,135] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa", > "", > "[2018-06-22 09:06:11,138] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:06:11,139] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.json < /var/lib/heat-config/deployed/3f5d31fd-1ed4-43e5-9d1a-3866348fbafa.notify.json", > "[2018-06-22 09:06:11,538] (heat-config) [INFO] ", > "[2018-06-22 09:06:11,538] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 
09:06:11,639 p=21516 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-06-22 09:06:11,658 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:11,675 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:06:11,725 p=21516 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "a9360f96-7faf-4ae7-aa0f-2872378a2e1d"}, "changed": false} >2018-06-22 09:06:11,743 p=21516 u=mistral | TASK [Render deployment file for CephStorageUpgradeInitDeployment] ************* >2018-06-22 09:06:12,314 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "a3bc96cd0c639fa628eb12d5d1fa19becd1c23a1", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageUpgradeInitDeployment-a9360f96-7faf-4ae7-aa0f-2872378a2e1d", "gid": 0, "group": "root", "md5sum": "ca7b93ad6f5f332d8a96e1ed40edeed9", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1186, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672771.79-21622732578540/source", "state": "file", "uid": 0} >2018-06-22 09:06:12,332 p=21516 u=mistral | TASK [Check if deployed file exists for CephStorageUpgradeInitDeployment] ****** >2018-06-22 09:06:12,634 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:06:12,652 p=21516 u=mistral | TASK [Check previous deployment rc for CephStorageUpgradeInitDeployment] ******* >2018-06-22 09:06:12,669 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:12,687 p=21516 u=mistral | TASK [Remove deployed file for CephStorageUpgradeInitDeployment when previous deployment failed] *** >2018-06-22 09:06:12,704 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 
09:06:12,722 p=21516 u=mistral | TASK [Force remove deployed file for CephStorageUpgradeInitDeployment] ********* >2018-06-22 09:06:12,738 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:12,757 p=21516 u=mistral | TASK [Run deployment CephStorageUpgradeInitDeployment] ************************* >2018-06-22 09:06:13,517 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.notify.json)", "delta": "0:00:00.455018", "end": "2018-06-22 09:06:13.531850", "rc": 0, "start": "2018-06-22 09:06:13.076832", "stderr": "[2018-06-22 09:06:13,100] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json\n[2018-06-22 09:06:13,126] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:06:13,126] (heat-config) [DEBUG] [2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-CephStorageUpgradeInitDeployment-z5642y5lqq33/780ea694-dbee-4330-a99e-f1a6d9a4d1d9\n[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:06:13,121] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d\n[2018-06-22 09:06:13,123] (heat-config) [INFO] \n[2018-06-22 09:06:13,123] (heat-config) [DEBUG] \n[2018-06-22 09:06:13,123] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d\n\n[2018-06-22 09:06:13,126] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:06:13,126] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.notify.json\n[2018-06-22 09:06:13,526] (heat-config) [INFO] \n[2018-06-22 09:06:13,526] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:06:13,100] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json", "[2018-06-22 09:06:13,126] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:06:13,126] (heat-config) [DEBUG] [2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-CephStorageUpgradeInitDeployment-z5642y5lqq33/780ea694-dbee-4330-a99e-f1a6d9a4d1d9", "[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:06:13,121] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d", "[2018-06-22 09:06:13,123] (heat-config) [INFO] ", "[2018-06-22 09:06:13,123] (heat-config) [DEBUG] ", "[2018-06-22 09:06:13,123] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d", "", "[2018-06-22 09:06:13,126] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:06:13,126] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.notify.json", "[2018-06-22 09:06:13,526] (heat-config) [INFO] ", "[2018-06-22 09:06:13,526] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:06:13,537 p=21516 u=mistral | TASK [Output for CephStorageUpgradeInitDeployment] ***************************** >2018-06-22 09:06:13,582 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:06:13,100] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json", > "[2018-06-22 09:06:13,126] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:06:13,126] (heat-config) [DEBUG] [2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-dcrpu75ghvmg-0-jybo3u4pnq7o-CephStorageUpgradeInitDeployment-z5642y5lqq33/780ea694-dbee-4330-a99e-f1a6d9a4d1d9", > "[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:06:13,120] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:06:13,121] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d", > "[2018-06-22 09:06:13,123] (heat-config) [INFO] ", > "[2018-06-22 09:06:13,123] (heat-config) [DEBUG] ", > "[2018-06-22 09:06:13,123] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/a9360f96-7faf-4ae7-aa0f-2872378a2e1d", > "", > "[2018-06-22 09:06:13,126] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:06:13,126] (heat-config) [DEBUG] 
Running heat-config-notify /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.json < /var/lib/heat-config/deployed/a9360f96-7faf-4ae7-aa0f-2872378a2e1d.notify.json", > "[2018-06-22 09:06:13,526] (heat-config) [INFO] ", > "[2018-06-22 09:06:13,526] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:06:13,601 p=21516 u=mistral | TASK [Check-mode for Run deployment CephStorageUpgradeInitDeployment] ********** >2018-06-22 09:06:13,613 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:13,630 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:06:13,714 p=21516 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "ef029c42-2d2f-415f-9bcb-619d07293bc4"}, "changed": false} >2018-06-22 09:06:13,731 p=21516 u=mistral | TASK [Render deployment file for CephStorageDeployment] ************************ >2018-06-22 09:06:14,334 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "ca58089851564f39ffcfafbd040ab9c688eb21ab", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageDeployment-ef029c42-2d2f-415f-9bcb-619d07293bc4", "gid": 0, "group": "root", "md5sum": "14a40e768d13597d21eeb34cb51f5f0a", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9062, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672773.82-211777910940698/source", "state": "file", "uid": 0} >2018-06-22 09:06:14,351 p=21516 u=mistral | TASK [Check if deployed file exists for CephStorageDeployment] ***************** >2018-06-22 09:06:14,644 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:06:14,664 p=21516 u=mistral | TASK [Check previous deployment rc for CephStorageDeployment] ****************** >2018-06-22 09:06:14,681 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-22 09:06:14,698 p=21516 u=mistral | TASK [Remove deployed file for CephStorageDeployment when previous deployment failed] *** >2018-06-22 09:06:14,714 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:14,731 p=21516 u=mistral | TASK [Force remove deployed file for CephStorageDeployment] ******************** >2018-06-22 09:06:14,746 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:14,763 p=21516 u=mistral | TASK [Run deployment CephStorageDeployment] ************************************ >2018-06-22 09:06:15,609 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.notify.json)", "delta": "0:00:00.539299", "end": "2018-06-22 09:06:15.626033", "rc": 0, "start": "2018-06-22 09:06:15.086734", "stderr": "[2018-06-22 09:06:15,110] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json\n[2018-06-22 09:06:15,227] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:06:15,227] (heat-config) [DEBUG] \n[2018-06-22 09:06:15,227] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 09:06:15,228] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.notify.json\n[2018-06-22 09:06:15,620] (heat-config) [INFO] \n[2018-06-22 09:06:15,620] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:06:15,110] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < 
/var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json", "[2018-06-22 09:06:15,227] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:06:15,227] (heat-config) [DEBUG] ", "[2018-06-22 09:06:15,227] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 09:06:15,228] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.notify.json", "[2018-06-22 09:06:15,620] (heat-config) [INFO] ", "[2018-06-22 09:06:15,620] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:06:15,630 p=21516 u=mistral | TASK [Output for CephStorageDeployment] **************************************** >2018-06-22 09:06:15,675 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:06:15,110] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json", > "[2018-06-22 09:06:15,227] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:06:15,227] (heat-config) [DEBUG] ", > "[2018-06-22 09:06:15,227] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 09:06:15,228] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.json < /var/lib/heat-config/deployed/ef029c42-2d2f-415f-9bcb-619d07293bc4.notify.json", > "[2018-06-22 09:06:15,620] (heat-config) [INFO] ", > "[2018-06-22 09:06:15,620] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:06:15,692 p=21516 u=mistral | TASK [Check-mode for Run deployment CephStorageDeployment] ********************* >2018-06-22 09:06:15,705 p=21516 u=mistral | skipping: [ceph-0] 
=> {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:15,722 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:06:15,772 p=21516 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9"}, "changed": false} >2018-06-22 09:06:15,791 p=21516 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment] ******************* >2018-06-22 09:06:16,339 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "776cda523bf9267d9e0bff262f11545b2d9ff122", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostsDeployment-20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", "gid": 0, "group": "root", "md5sum": "a0c7eb8cb4afd8ccf003e2d65228f716", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4087, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672775.84-37860704177512/source", "state": "file", "uid": 0} >2018-06-22 09:06:16,359 p=21516 u=mistral | TASK [Check if deployed file exists for CephStorageHostsDeployment] ************ >2018-06-22 09:06:16,656 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:06:16,677 p=21516 u=mistral | TASK [Check previous deployment rc for CephStorageHostsDeployment] ************* >2018-06-22 09:06:16,694 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:16,713 p=21516 u=mistral | TASK [Remove deployed file for CephStorageHostsDeployment when previous deployment failed] *** >2018-06-22 09:06:16,730 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:16,748 p=21516 u=mistral | TASK [Force remove deployed file for CephStorageHostsDeployment] *************** >2018-06-22 09:06:16,763 p=21516 u=mistral | skipping: [ceph-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:16,782 p=21516 u=mistral | TASK [Run deployment CephStorageHostsDeployment] ******************************* >2018-06-22 09:06:17,555 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.notify.json)", "delta": "0:00:00.446365", "end": "2018-06-22 09:06:17.545831", "rc": 0, "start": "2018-06-22 09:06:17.099466", "stderr": "[2018-06-22 09:06:17,121] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json\n[2018-06-22 09:06:17,154] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain 
compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 
ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}\n[2018-06-22 09:06:17,155] (heat-config) [DEBUG] [2018-06-22 09:06:17,141] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-22 09:06:17,141] (heat-config) [INFO] 
deploy_action=CREATE\n[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-2ltrnux7xsrp-0-mxavcgxnktsu/561330f0-d056-44bf-beb3-da80a7f0871d\n[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:06:17,141] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9\n[2018-06-22 09:06:17,151] (heat-config) [INFO] \n[2018-06-22 09:06:17,151] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain 
ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 
'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in 
'/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries 
/etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 
overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\n172.17.3.15 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.17 overcloud.internalapi.localdomain\n10.0.0.110 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.104 controller-0.external.localdomain controller-0.external\n192.168.24.8 controller-0.management.localdomain controller-0.management\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.21 compute-0.localdomain compute-0\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.15 compute-0.external.localdomain compute-0.external\n192.168.24.15 compute-0.management.localdomain compute-0.management\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.14 ceph-0.localdomain ceph-0\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-06-22 09:06:17,151] (heat-config) [INFO] Completed 
/var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9\n\n[2018-06-22 09:06:17,155] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:06:17,155] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.notify.json\n[2018-06-22 09:06:17,539] (heat-config) [INFO] \n[2018-06-22 09:06:17,540] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:06:17,121] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json", "[2018-06-22 09:06:17,154] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 
compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", "[2018-06-22 09:06:17,155] (heat-config) [DEBUG] [2018-06-22 09:06:17,141] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-06-22 09:06:17,141] (heat-config) [INFO] 
deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-2ltrnux7xsrp-0-mxavcgxnktsu/561330f0-d056-44bf-beb3-da80a7f0871d", "[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:06:17,141] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", "[2018-06-22 09:06:17,151] (heat-config) [INFO] ", "[2018-06-22 09:06:17,151] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 
ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 
ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", 
"192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ 
'[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 
overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' 
-f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", "172.17.3.15 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.17 overcloud.internalapi.localdomain", "10.0.0.110 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.18 controller-0.storage.localdomain controller-0.storage", "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.104 controller-0.external.localdomain controller-0.external", "192.168.24.8 controller-0.management.localdomain controller-0.management", "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.21 compute-0.localdomain compute-0", "172.17.3.10 compute-0.storage.localdomain compute-0.storage", "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.15 compute-0.external.localdomain compute-0.external", "192.168.24.15 compute-0.management.localdomain compute-0.management", "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.14 ceph-0.localdomain ceph-0", "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.10 ceph-0.external.localdomain ceph-0.external", "192.168.24.10 ceph-0.management.localdomain ceph-0.management", "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", 
"", "[2018-06-22 09:06:17,151] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", "", "[2018-06-22 09:06:17,155] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:06:17,155] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.notify.json", "[2018-06-22 09:06:17,539] (heat-config) [INFO] ", "[2018-06-22 09:06:17,540] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:06:17,581 p=21516 u=mistral | TASK [Output for CephStorageHostsDeployment] *********************************** >2018-06-22 09:06:17,651 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:06:17,121] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json", > "[2018-06-22 09:06:17,154] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain 
controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.14 
overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.14 overcloud.ctlplane.localdomain\\n172.17.3.15 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.17 overcloud.internalapi.localdomain\\n10.0.0.110 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.18 controller-0.storage.localdomain controller-0.storage\\n172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.15 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.104 controller-0.external.localdomain controller-0.external\\n192.168.24.8 controller-0.management.localdomain controller-0.management\\n192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.21 compute-0.localdomain compute-0\\n172.17.3.10 compute-0.storage.localdomain compute-0.storage\\n192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.10 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.15 compute-0.external.localdomain compute-0.external\\n192.168.24.15 compute-0.management.localdomain compute-0.management\\n192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.14 ceph-0.localdomain ceph-0\\n172.17.3.14 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.10 ceph-0.external.localdomain ceph-0.external\\n192.168.24.10 ceph-0.management.localdomain ceph-0.management\\n192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 
0}", > "[2018-06-22 09:06:17,155] (heat-config) [DEBUG] [2018-06-22 09:06:17,141] (heat-config) [INFO] hosts=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-06-22 09:06:17,141] 
(heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-2ltrnux7xsrp-0-mxavcgxnktsu/561330f0-d056-44bf-beb3-da80a7f0871d", > "[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:06:17,141] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:06:17,141] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", > "[2018-06-22 09:06:17,151] (heat-config) [INFO] ", > "[2018-06-22 09:06:17,151] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > 
"192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 
compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > 
"192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 
compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain 
compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > 
"172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > 
"172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.14 overcloud.ctlplane.localdomain", > "172.17.3.15 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.17 overcloud.internalapi.localdomain", > "10.0.0.110 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.18 controller-0.storage.localdomain controller-0.storage", > "172.17.4.17 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.15 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.104 controller-0.external.localdomain controller-0.external", > "192.168.24.8 controller-0.management.localdomain controller-0.management", > "192.168.24.8 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.21 compute-0.localdomain compute-0", > "172.17.3.10 compute-0.storage.localdomain compute-0.storage", > "192.168.24.15 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.21 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.10 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.15 compute-0.external.localdomain compute-0.external", > "192.168.24.15 compute-0.management.localdomain compute-0.management", > "192.168.24.15 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.14 ceph-0.localdomain ceph-0", > "172.17.3.14 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.16 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.10 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.10 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.10 ceph-0.external.localdomain ceph-0.external", > "192.168.24.10 ceph-0.management.localdomain ceph-0.management", > "192.168.24.10 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-06-22 09:06:17,151] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9", > "", > "[2018-06-22 09:06:17,155] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:06:17,155] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.json < /var/lib/heat-config/deployed/20d1b4a8-b52c-441a-8ce4-973d7eb1d0a9.notify.json", > "[2018-06-22 09:06:17,539] (heat-config) [INFO] ", > "[2018-06-22 09:06:17,540] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:06:17,677 p=21516 u=mistral | TASK [Check-mode for Run deployment CephStorageHostsDeployment] **************** >2018-06-22 09:06:17,691 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:17,709 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:06:17,889 
p=21516 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "20da6b8d-d43f-401a-86f5-ce717cbbd17b"}, "changed": false} >2018-06-22 09:06:17,908 p=21516 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment] **************** >2018-06-22 09:06:18,636 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "8416c8ef402d96e235570eccb103ee029f84c940", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesDeployment-20da6b8d-d43f-401a-86f5-ce717cbbd17b", "gid": 0, "group": "root", "md5sum": "431467e14a3baa5af3ebef350d71bc28", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19024, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672778.11-247798765050692/source", "state": "file", "uid": 0} >2018-06-22 09:06:18,653 p=21516 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesDeployment] ********* >2018-06-22 09:06:19,010 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:06:19,029 p=21516 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesDeployment] ********** >2018-06-22 09:06:19,047 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:19,064 p=21516 u=mistral | TASK [Remove deployed file for CephStorageAllNodesDeployment when previous deployment failed] *** >2018-06-22 09:06:19,080 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:19,098 p=21516 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesDeployment] ************ >2018-06-22 09:06:19,113 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:19,131 p=21516 u=mistral | TASK [Run deployment CephStorageAllNodesDeployment] **************************** >2018-06-22 09:06:20,020 p=21516 
u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.notify.json)", "delta": "0:00:00.525349", "end": "2018-06-22 09:06:20.032369", "rc": 0, "start": "2018-06-22 09:06:19.507020", "stderr": "[2018-06-22 09:06:19,529] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.json\n[2018-06-22 09:06:19,642] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:06:19,642] (heat-config) [DEBUG] \n[2018-06-22 09:06:19,642] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-06-22 09:06:19,642] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.json < /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.notify.json\n[2018-06-22 09:06:20,026] (heat-config) [INFO] \n[2018-06-22 09:06:20,026] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:06:19,529] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.json", "[2018-06-22 09:06:19,642] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:06:19,642] (heat-config) [DEBUG] ", "[2018-06-22 09:06:19,642] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-06-22 09:06:19,642] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.json < /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.notify.json", "[2018-06-22 09:06:20,026] (heat-config) [INFO] ", "[2018-06-22 09:06:20,026] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:06:20,074 p=21516 u=mistral | 
TASK [Output for CephStorageAllNodesDeployment] ******************************** >2018-06-22 09:06:20,119 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:06:19,529] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.json", > "[2018-06-22 09:06:19,642] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:06:19,642] (heat-config) [DEBUG] ", > "[2018-06-22 09:06:19,642] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-06-22 09:06:19,642] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.json < /var/lib/heat-config/deployed/20da6b8d-d43f-401a-86f5-ce717cbbd17b.notify.json", > "[2018-06-22 09:06:20,026] (heat-config) [INFO] ", > "[2018-06-22 09:06:20,026] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:06:20,137 p=21516 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesDeployment] ************* >2018-06-22 09:06:20,150 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:20,166 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:06:20,219 p=21516 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "56a88551-9d40-42c1-b9c3-82e6b1c065ac"}, "changed": false} >2018-06-22 09:06:20,237 p=21516 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment] ****** >2018-06-22 09:06:20,793 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "5d2cc31e9941f5a265d39a4201f859e00bda2848", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesValidationDeployment-56a88551-9d40-42c1-b9c3-82e6b1c065ac", "gid": 0, 
"group": "root", "md5sum": "410f6c2ae27e03ae95d9fb6d21a7cfbb", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4942, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672780.29-53356270332024/source", "state": "file", "uid": 0} >2018-06-22 09:06:20,811 p=21516 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesValidationDeployment] *** >2018-06-22 09:06:21,104 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:06:21,123 p=21516 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesValidationDeployment] *** >2018-06-22 09:06:21,143 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:21,164 p=21516 u=mistral | TASK [Remove deployed file for CephStorageAllNodesValidationDeployment when previous deployment failed] *** >2018-06-22 09:06:21,180 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:21,198 p=21516 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesValidationDeployment] *** >2018-06-22 09:06:21,213 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:21,231 p=21516 u=mistral | TASK [Run deployment CephStorageAllNodesValidationDeployment] ****************** >2018-06-22 09:06:22,412 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.notify.json)", "delta": "0:00:00.885675", "end": "2018-06-22 09:06:22.426668", "rc": 0, "start": "2018-06-22 09:06:21.540993", "stderr": "[2018-06-22 09:06:21,563] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json\n[2018-06-22 09:06:22,057] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:06:22,057] (heat-config) [DEBUG] [2018-06-22 09:06:21,582] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8\n[2018-06-22 09:06:21,582] (heat-config) [INFO] validate_fqdn=False\n[2018-06-22 09:06:21,582] (heat-config) [INFO] validate_ntp=True\n[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-hkjyyirum7ne-0-t433fatyktkn/ab0eaf14-3185-4d7e-835d-9f30093889bb\n[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:06:21,583] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac\n[2018-06-22 09:06:22,053] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\nPing to 10.0.0.104 succeeded.\nSUCCESS\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\nPing to 172.17.3.18 
succeeded.\nSUCCESS\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\nPing to 172.17.4.17 succeeded.\nSUCCESS\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\nPing to 192.168.24.8 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-06-22 09:06:22,053] (heat-config) [DEBUG] \n[2018-06-22 09:06:22,053] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac\n\n[2018-06-22 09:06:22,057] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:06:22,057] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json < /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.notify.json\n[2018-06-22 09:06:22,421] (heat-config) [INFO] \n[2018-06-22 09:06:22,421] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:06:21,563] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json", "[2018-06-22 09:06:22,057] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:06:22,057] (heat-config) [DEBUG] [2018-06-22 09:06:21,582] (heat-config) [INFO] 
ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", "[2018-06-22 09:06:21,582] (heat-config) [INFO] validate_fqdn=False", "[2018-06-22 09:06:21,582] (heat-config) [INFO] validate_ntp=True", "[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-hkjyyirum7ne-0-t433fatyktkn/ab0eaf14-3185-4d7e-835d-9f30093889bb", "[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:06:21,583] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac", "[2018-06-22 09:06:22,053] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", "Ping to 10.0.0.104 succeeded.", "SUCCESS", "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", "Ping to 172.17.3.18 succeeded.", "SUCCESS", "Trying to ping 172.17.4.17 for local network 172.17.4.0/24.", "Ping to 172.17.4.17 succeeded.", "SUCCESS", "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", "Ping to 192.168.24.8 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-06-22 09:06:22,053] (heat-config) [DEBUG] ", "[2018-06-22 09:06:22,053] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac", "", "[2018-06-22 09:06:22,057] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:06:22,057] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json < 
/var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.notify.json", "[2018-06-22 09:06:22,421] (heat-config) [INFO] ", "[2018-06-22 09:06:22,421] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:06:22,433 p=21516 u=mistral | TASK [Output for CephStorageAllNodesValidationDeployment] ********************** >2018-06-22 09:06:22,481 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:06:21,563] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json", > "[2018-06-22 09:06:22,057] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.104 for local network 10.0.0.0/24.\\nPing to 10.0.0.104 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.18 for local network 172.17.3.0/24.\\nPing to 172.17.3.18 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.17 for local network 172.17.4.0/24.\\nPing to 172.17.4.17 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.8 for local network 192.168.24.0/24.\\nPing to 192.168.24.8 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:06:22,057] (heat-config) [DEBUG] [2018-06-22 09:06:21,582] (heat-config) [INFO] ping_test_ips=172.17.3.18 172.17.4.17 172.17.1.16 172.17.2.15 10.0.0.104 192.168.24.8", > "[2018-06-22 09:06:21,582] (heat-config) [INFO] validate_fqdn=False", > "[2018-06-22 09:06:21,582] (heat-config) [INFO] validate_ntp=True", > "[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:06:21,582] (heat-config) [INFO] 
deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-hkjyyirum7ne-0-t433fatyktkn/ab0eaf14-3185-4d7e-835d-9f30093889bb", > "[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:06:21,582] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:06:21,583] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac", > "[2018-06-22 09:06:22,053] (heat-config) [INFO] Trying to ping 10.0.0.104 for local network 10.0.0.0/24.", > "Ping to 10.0.0.104 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.18 for local network 172.17.3.0/24.", > "Ping to 172.17.3.18 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.17 for local network 172.17.4.0/24.", > "Ping to 172.17.4.17 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.8 for local network 192.168.24.0/24.", > "Ping to 192.168.24.8 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-06-22 09:06:22,053] (heat-config) [DEBUG] ", > "[2018-06-22 09:06:22,053] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/56a88551-9d40-42c1-b9c3-82e6b1c065ac", > "", > "[2018-06-22 09:06:22,057] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:06:22,057] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.json < /var/lib/heat-config/deployed/56a88551-9d40-42c1-b9c3-82e6b1c065ac.notify.json", > "[2018-06-22 09:06:22,421] (heat-config) [INFO] ", > "[2018-06-22 09:06:22,421] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:06:22,502 p=21516 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesValidationDeployment] *** >2018-06-22 09:06:22,517 p=21516 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:22,534 p=21516 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-06-22 09:06:22,587 p=21516 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "137bc291-65a7-434a-8973-d5bc9ed3db0b"}, "changed": false} >2018-06-22 09:06:22,607 p=21516 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy] ******************* >2018-06-22 09:06:23,133 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f45c0846939b94eb8c667836bed68361dbb5d65c", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageArtifactsDeploy-137bc291-65a7-434a-8973-d5bc9ed3db0b", "gid": 0, "group": "root", "md5sum": "f3593a409ddcc0d1373765e331e25c01", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2023, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672782.65-40852555949214/source", "state": "file", "uid": 0} >2018-06-22 09:06:23,153 p=21516 u=mistral | TASK [Check if deployed file exists for CephStorageArtifactsDeploy] ************ >2018-06-22 09:06:23,442 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:06:23,462 p=21516 u=mistral | TASK [Check previous deployment rc for CephStorageArtifactsDeploy] ************* >2018-06-22 09:06:23,479 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:23,497 p=21516 u=mistral | TASK [Remove deployed file for CephStorageArtifactsDeploy when previous deployment failed] *** >2018-06-22 09:06:23,517 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:23,537 p=21516 u=mistral | TASK [Force remove deployed file for CephStorageArtifactsDeploy] *************** >2018-06-22 09:06:23,553 p=21516 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:23,570 p=21516 u=mistral | TASK [Run deployment CephStorageArtifactsDeploy] ******************************* >2018-06-22 09:06:24,306 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.notify.json)", "delta": "0:00:00.438851", "end": "2018-06-22 09:06:24.322035", "rc": 0, "start": "2018-06-22 09:06:23.883184", "stderr": "[2018-06-22 09:06:23,904] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json\n[2018-06-22 09:06:23,931] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:06:23,931] (heat-config) [DEBUG] [2018-06-22 09:06:23,923] (heat-config) [INFO] artifact_urls=\n[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96\n[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_action=CREATE\n[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-CephStorageArtifactsDeploy-2vfao6bm2v6m-0-m2us6qg4usxt/67ae6c07-53cf-4a05-91c8-d35bce337aaa\n[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-06-22 09:06:23,924] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b\n[2018-06-22 09:06:23,928] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-06-22 09:06:23,928] (heat-config) [DEBUG] \n[2018-06-22 09:06:23,928] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b\n\n[2018-06-22 09:06:23,931] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-06-22 09:06:23,931] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.notify.json\n[2018-06-22 09:06:24,316] (heat-config) [INFO] \n[2018-06-22 09:06:24,316] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:06:23,904] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json", "[2018-06-22 09:06:23,931] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:06:23,931] (heat-config) [DEBUG] [2018-06-22 09:06:23,923] (heat-config) [INFO] artifact_urls=", "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_action=CREATE", "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-CephStorageArtifactsDeploy-2vfao6bm2v6m-0-m2us6qg4usxt/67ae6c07-53cf-4a05-91c8-d35bce337aaa", "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-06-22 09:06:23,924] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b", "[2018-06-22 09:06:23,928] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-06-22 09:06:23,928] (heat-config) [DEBUG] ", "[2018-06-22 09:06:23,928] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b", "", "[2018-06-22 09:06:23,931] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-06-22 09:06:23,931] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.notify.json", "[2018-06-22 09:06:24,316] (heat-config) [INFO] ", "[2018-06-22 09:06:24,316] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:06:24,325 p=21516 u=mistral | TASK [Output for CephStorageArtifactsDeploy] *********************************** >2018-06-22 09:06:24,372 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:06:23,904] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json", > "[2018-06-22 09:06:23,931] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:06:23,931] (heat-config) [DEBUG] [2018-06-22 09:06:23,923] (heat-config) [INFO] artifact_urls=", > "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_server_id=3bfb069e-4daf-4e4f-80f5-34125cd96b96", > "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_action=CREATE", > "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-haw7i3vfvlpg-CephStorageArtifactsDeploy-2vfao6bm2v6m-0-m2us6qg4usxt/67ae6c07-53cf-4a05-91c8-d35bce337aaa", > "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-06-22 09:06:23,923] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-06-22 09:06:23,924] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b", > "[2018-06-22 09:06:23,928] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-06-22 09:06:23,928] (heat-config) [DEBUG] ", > "[2018-06-22 09:06:23,928] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/137bc291-65a7-434a-8973-d5bc9ed3db0b", > "", > "[2018-06-22 09:06:23,931] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-06-22 09:06:23,931] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.json < /var/lib/heat-config/deployed/137bc291-65a7-434a-8973-d5bc9ed3db0b.notify.json", > "[2018-06-22 09:06:24,316] (heat-config) [INFO] ", > "[2018-06-22 09:06:24,316] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:06:24,389 p=21516 u=mistral | TASK [Check-mode for Run deployment CephStorageArtifactsDeploy] **************** >2018-06-22 09:06:24,403 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:24,420 p=21516 u=mistral | TASK [Lookup deployment 
UUID] ************************************************** >2018-06-22 09:06:24,484 p=21516 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "fea7c44d-af59-48c5-a656-2d6660e43194"}, "changed": false} >2018-06-22 09:06:24,502 p=21516 u=mistral | TASK [Render deployment file for CephStorageHostPrepDeployment] **************** >2018-06-22 09:06:25,079 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "503e19d18dcb56bb669bfa55fcb11151a99ffcfd", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostPrepDeployment-fea7c44d-af59-48c5-a656-2d6660e43194", "gid": 0, "group": "root", "md5sum": "f0461953e64ef44ab7462881115e9c7e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19872, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672784.57-235698793144075/source", "state": "file", "uid": 0} >2018-06-22 09:06:25,097 p=21516 u=mistral | TASK [Check if deployed file exists for CephStorageHostPrepDeployment] ********* >2018-06-22 09:06:25,396 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:06:25,414 p=21516 u=mistral | TASK [Check previous deployment rc for CephStorageHostPrepDeployment] ********** >2018-06-22 09:06:25,430 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:25,448 p=21516 u=mistral | TASK [Remove deployed file for CephStorageHostPrepDeployment when previous deployment failed] *** >2018-06-22 09:06:25,465 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:25,485 p=21516 u=mistral | TASK [Force remove deployed file for CephStorageHostPrepDeployment] ************ >2018-06-22 09:06:25,500 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:25,518 p=21516 u=mistral | TASK [Run deployment 
CephStorageHostPrepDeployment] **************************** >2018-06-22 09:06:29,987 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.notify.json)", "delta": "0:00:04.169553", "end": "2018-06-22 09:06:30.002466", "rc": 0, "start": "2018-06-22 09:06:25.832913", "stderr": "[2018-06-22 09:06:25,855] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json\n[2018-06-22 09:06:29,609] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-06-22 09:06:29,609] (heat-config) [DEBUG] [2018-06-22 09:06:25,875] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_variables.json\n[2018-06-22 09:06:29,605] (heat-config) [INFO] Return code 0\n[2018-06-22 09:06:29,605] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] 
*******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-06-22 09:06:29,605] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml\n\n[2018-06-22 09:06:29,609] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-06-22 09:06:29,609] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.notify.json\n[2018-06-22 09:06:29,996] (heat-config) [INFO] \n[2018-06-22 09:06:29,997] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-06-22 09:06:25,855] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json", "[2018-06-22 09:06:29,609] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-06-22 09:06:29,609] (heat-config) [DEBUG] [2018-06-22 09:06:25,875] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml --extra-vars 
@/var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_variables.json", "[2018-06-22 09:06:29,605] (heat-config) [INFO] Return code 0", "[2018-06-22 09:06:29,605] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-06-22 09:06:29,605] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml", "", "[2018-06-22 09:06:29,609] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-06-22 09:06:29,609] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.notify.json", "[2018-06-22 09:06:29,996] (heat-config) [INFO] ", "[2018-06-22 09:06:29,997] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-06-22 09:06:30,007 p=21516 u=mistral | TASK [Output for CephStorageHostPrepDeployment] ******************************** >2018-06-22 09:06:30,053 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "msg": [ > { > "stderr": [ > "[2018-06-22 09:06:25,855] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json", > "[2018-06-22 09:06:29,609] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering 
Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-06-22 09:06:29,609] (heat-config) [DEBUG] [2018-06-22 09:06:25,875] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_variables.json", > "[2018-06-22 09:06:29,605] (heat-config) [INFO] Return code 0", > "[2018-06-22 09:06:29,605] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-06-22 09:06:29,605] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fea7c44d-af59-48c5-a656-2d6660e43194_playbook.yaml", > "", > "[2018-06-22 09:06:29,609] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-06-22 09:06:29,609] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.json < 
/var/lib/heat-config/deployed/fea7c44d-af59-48c5-a656-2d6660e43194.notify.json", > "[2018-06-22 09:06:29,996] (heat-config) [INFO] ", > "[2018-06-22 09:06:29,997] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-06-22 09:06:30,072 p=21516 u=mistral | TASK [Check-mode for Run deployment CephStorageHostPrepDeployment] ************* >2018-06-22 09:06:30,085 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:30,091 p=21516 u=mistral | PLAY [Host prep steps] ********************************************************* >2018-06-22 09:06:30,126 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:30,178 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:30,179 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:30,193 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:30,197 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:30,467 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/aodh) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/aodh", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:30,771 p=21516 u=mistral | 
ok: [controller-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/aodh-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/aodh-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:30,793 p=21516 u=mistral | TASK [aodh logs readme] ******************************************************** >2018-06-22 09:06:30,845 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:30,858 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:31,388 p=21516 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b6cf6dbe054f430c33d39c1a1a88593536d6e659", "msg": "Destination directory /var/log/aodh does not exist"} >2018-06-22 09:06:31,389 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:31,410 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:31,459 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:31,472 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:31,747 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:31,768 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:31,818 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:31,832 p=21516 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:32,104 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:32,126 p=21516 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-06-22 09:06:32,173 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:32,186 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:32,720 p=21516 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-06-22 09:06:32,721 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:32,744 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:32,794 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:32,795 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:32,812 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:32,821 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": 
"Conditional result was False"} >2018-06-22 09:06:33,094 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:33,386 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/cinder-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/cinder-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:33,407 p=21516 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-06-22 09:06:33,457 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:33,469 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:34,061 p=21516 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292", "msg": "Destination directory /var/log/cinder does not exist"} >2018-06-22 09:06:34,061 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:34,083 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:06:34,133 p=21516 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:34,134 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:34,146 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:34,215 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:34,478 p=21516 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:34,788 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:34,810 p=21516 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-22 09:06:34,861 p=21516 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:34,872 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:35,141 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:35,163 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:06:35,210 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:35,226 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:35,505 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:35,527 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:06:35,578 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:35,578 p=21516 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:35,598 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": 
"/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:35,599 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:35,884 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:36,188 p=21516 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:36,210 p=21516 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-06-22 09:06:36,258 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"cinder_enable_iscsi_backend": false}, "changed": false} >2018-06-22 09:06:36,259 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,269 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,290 p=21516 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-06-22 09:06:36,319 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,344 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,354 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-06-22 09:06:36,375 p=21516 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-06-22 09:06:36,401 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,424 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,435 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,456 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:36,504 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,519 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,788 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/glance) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/glance", "mode": "0755", "owner": "root", "path": "/var/log/containers/glance", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:36,810 p=21516 u=mistral | TASK [glance logs readme] ****************************************************** >2018-06-22 09:06:36,861 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:36,875 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,400 p=21516 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "e368ae3272baeb19e1113009ea5dae00e797c919", "msg": "Destination directory /var/log/glance does not exist"} >2018-06-22 09:06:37,400 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:37,423 p=21516 u=mistral | TASK [set_fact] **************************************************************** >2018-06-22 09:06:37,450 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,475 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,484 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,505 p=21516 u=mistral | TASK [file] ******************************************************************** >2018-06-22 09:06:37,530 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,553 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,567 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,590 p=21516 u=mistral | TASK [stat] ******************************************************************** >2018-06-22 09:06:37,615 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,637 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,647 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,668 p=21516 u=mistral | TASK [copy] ******************************************************************** >2018-06-22 09:06:37,695 p=21516 u=mistral | skipping: [controller-0] => 
(item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,718 p=21516 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,730 p=21516 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,752 p=21516 u=mistral | TASK [mount] ******************************************************************* >2018-06-22 09:06:37,779 p=21516 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,806 p=21516 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,821 p=21516 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,842 p=21516 u=mistral | TASK [Mount Node Staging Location] ********************************************* >2018-06-22 09:06:37,873 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-22 09:06:37,898 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,907 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,928 p=21516 u=mistral | TASK [Mount NFS on host] ******************************************************* >2018-06-22 09:06:37,955 p=21516 u=mistral | skipping: [controller-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,977 p=21516 u=mistral | skipping: [compute-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:37,992 p=21516 u=mistral | skipping: [ceph-0] => (item={u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0', u'NFS_SHARE': u''}) => {"changed": false, "item": {"NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0", "NFS_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:38,013 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:38,060 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:38,061 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": 
"Conditional result was False"} >2018-06-22 09:06:38,072 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:38,076 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:38,357 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/gnocchi", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:38,659 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/gnocchi-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/gnocchi-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:38,682 p=21516 u=mistral | TASK [gnocchi logs readme] ***************************************************** >2018-06-22 09:06:38,729 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:38,741 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,283 p=21516 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "2f6114e0f135d7222e70a07579ab0b2b6f967ff8", "msg": "Destination directory /var/log/gnocchi does not exist"} >2018-06-22 09:06:39,283 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:39,306 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:39,355 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,366 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,642 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:39,662 p=21516 u=mistral | TASK [get parameters] ********************************************************** >2018-06-22 09:06:39,711 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:06:39,713 p=21516 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:06:39,724 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:06:39,743 p=21516 u=mistral | TASK [get DeployedSSLCertificatePath attributes] ******************************* >2018-06-22 09:06:39,769 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,790 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 
09:06:39,802 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,822 p=21516 u=mistral | TASK [Assign bootstrap node] *************************************************** >2018-06-22 09:06:39,848 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,870 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,880 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,901 p=21516 u=mistral | TASK [set is_bootstrap_node fact] ********************************************** >2018-06-22 09:06:39,927 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,955 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,966 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:39,986 p=21516 u=mistral | TASK [get haproxy status] ****************************************************** >2018-06-22 09:06:40,010 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,032 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,044 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,064 p=21516 u=mistral | TASK [get pacemaker status] **************************************************** >2018-06-22 09:06:40,088 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 
09:06:40,111 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,122 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,142 p=21516 u=mistral | TASK [get docker status] ******************************************************* >2018-06-22 09:06:40,167 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,188 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,204 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,226 p=21516 u=mistral | TASK [get container_id] ******************************************************** >2018-06-22 09:06:40,251 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,274 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,284 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,304 p=21516 u=mistral | TASK [get pcs resource name for haproxy container] ***************************** >2018-06-22 09:06:40,330 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,353 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,365 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,385 p=21516 u=mistral | TASK [remove DeployedSSLCertificatePath if is dir] ***************************** >2018-06-22 
09:06:40,431 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,432 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,442 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,463 p=21516 u=mistral | TASK [push certificate content] ************************************************ >2018-06-22 09:06:40,490 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:06:40,514 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:06:40,526 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:06:40,546 p=21516 u=mistral | TASK [set certificate ownership] *********************************************** >2018-06-22 09:06:40,572 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,593 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,603 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,623 p=21516 u=mistral | TASK [reload haproxy if enabled] *********************************************** >2018-06-22 09:06:40,645 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,667 p=21516 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,679 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,700 p=21516 u=mistral | TASK [restart pacemaker resource for haproxy] ********************************** >2018-06-22 09:06:40,748 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,748 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,758 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,811 p=21516 u=mistral | TASK [set kolla_dir fact] ****************************************************** >2018-06-22 09:06:40,836 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,856 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,866 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,885 p=21516 u=mistral | TASK [set certificate group on host via container] ***************************** >2018-06-22 09:06:40,907 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,929 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,938 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:40,959 p=21516 u=mistral | TASK [copy certificate from kolla directory to final location] ***************** >2018-06-22 09:06:40,981 p=21516 u=mistral | skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,001 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,011 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,033 p=21516 u=mistral | TASK [send restart order to haproxy container] ********************************* >2018-06-22 09:06:41,060 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,082 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,091 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,111 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:06:41,156 p=21516 u=mistral | skipping: [compute-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,170 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,446 p=21516 u=mistral | ok: [controller-0] => (item=/var/lib/haproxy) => {"changed": false, "gid": 188, "group": "haproxy", "item": "/var/lib/haproxy", "mode": "0755", "owner": "haproxy", "path": "/var/lib/haproxy", "secontext": "system_u:object_r:haproxy_var_lib_t:s0", "size": 6, "state": "directory", "uid": 188} >2018-06-22 09:06:41,470 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:41,525 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": 
"Conditional result was False"} >2018-06-22 09:06:41,525 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,534 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,543 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:41,802 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:42,122 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:42,143 p=21516 u=mistral | TASK [heat logs readme] ******************************************************** >2018-06-22 09:06:42,190 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:42,203 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:42,736 p=21516 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "d30ca3bda176434d31659e7379616dd162ddb246", "msg": "Destination directory /var/log/heat does not exist"} >2018-06-22 09:06:42,736 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:42,758 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:42,807 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:42,808 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:42,825 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:42,833 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:43,101 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:43,413 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api-cfn", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api-cfn", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:43,435 p=21516 u=mistral | TASK [create 
persistent logs directory] **************************************** >2018-06-22 09:06:43,486 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:43,499 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:43,771 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:43,793 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:43,844 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:43,845 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:43,859 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:43,865 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:44,136 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:44,444 p=21516 
u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:44,466 p=21516 u=mistral | TASK [horizon logs readme] ***************************************************** >2018-06-22 09:06:44,514 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:44,527 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:45,054 p=21516 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ac324739761cb36b925d6e309482e26f7fe49b91", "msg": "Destination directory /var/log/horizon does not exist"} >2018-06-22 09:06:45,054 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:45,075 p=21516 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-22 09:06:45,122 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:45,134 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:45,415 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1529672695.0693085, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1529433183.0936344, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 5335882, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", 
"mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "18446744072695807771", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-22 09:06:45,440 p=21516 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-22 09:06:45,488 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:45,500 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:45,899 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "-.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "shutdown.target sockets.target iscsid.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": 
"/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127793", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", 
"StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-22 09:06:45,921 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:45,972 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:45,973 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:45,988 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:45,992 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:46,263 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": 
"directory", "uid": 0} >2018-06-22 09:06:46,568 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:46,594 p=21516 u=mistral | TASK [keystone logs readme] **************************************************** >2018-06-22 09:06:46,640 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:46,655 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:47,194 p=21516 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "910be882addb6df99267e9bd303f6d9bf658562e", "msg": "Destination directory /var/log/keystone does not exist"} >2018-06-22 09:06:47,194 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:47,218 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:47,269 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:47,283 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:47,546 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/memcached", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:47,566 p=21516 u=mistral | TASK [memcached logs readme] *************************************************** >2018-06-22 09:06:47,615 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-06-22 09:06:47,628 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:48,120 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "f72ee86fbe604c83734785fe970323e58e3fad9e", "dest": "/var/log/memcached-readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/memcached-readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 86, "state": "file", "uid": 0} >2018-06-22 09:06:48,143 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:06:48,195 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:48,195 p=21516 u=mistral | skipping: [compute-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:48,209 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:48,216 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:48,478 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/mysql) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/mysql", "mode": "0755", "owner": "root", "path": "/var/log/containers/mysql", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:48,793 p=21516 u=mistral | ok: [controller-0] => (item=/var/lib/mysql) => {"changed": false, "gid": 27, "group": "mysql", "item": "/var/lib/mysql", "mode": "0755", "owner": "mysql", "path": 
"/var/lib/mysql", "secontext": "system_u:object_r:mysqld_db_t:s0", "size": 6, "state": "directory", "uid": 27} >2018-06-22 09:06:48,817 p=21516 u=mistral | TASK [mysql logs readme] ******************************************************* >2018-06-22 09:06:48,869 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:48,882 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:49,366 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "de8fb5fe96200ab286121f8a09419702bd693743", "dest": "/var/log/mariadb/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/mariadb/readme.txt", "secontext": "system_u:object_r:mysqld_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-06-22 09:06:49,388 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:49,439 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:49,440 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:49,457 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:49,463 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:49,723 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => 
{"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:50,017 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/neutron-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/neutron-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:50,040 p=21516 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-06-22 09:06:50,092 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:50,106 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:50,626 p=21516 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-06-22 09:06:50,626 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:50,648 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:50,699 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:50,714 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:50,979 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:51,004 p=21516 u=mistral | TASK [create /var/lib/neutron] ************************************************* >2018-06-22 09:06:51,056 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:51,067 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:51,388 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/neutron", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:51,411 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:51,463 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => 
{"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:51,465 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:51,480 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:51,484 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:51,796 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:52,093 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:52,116 p=21516 u=mistral | TASK [nova logs readme] ******************************************************** >2018-06-22 09:06:52,166 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:52,184 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:52,752 p=21516 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-06-22 09:06:52,752 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:52,773 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:52,825 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:52,839 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:53,100 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:53,122 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:53,168 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:53,169 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:53,181 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:53,187 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:53,458 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, 
"gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:53,753 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-placement", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-placement", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:53,774 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:06:53,823 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:53,824 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:53,838 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:53,842 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:54,109 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/panko) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/panko", "mode": "0755", "owner": "root", "path": "/var/log/containers/panko", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:54,418 p=21516 
u=mistral | ok: [controller-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/panko-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/panko-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:54,440 p=21516 u=mistral | TASK [panko logs readme] ******************************************************* >2018-06-22 09:06:54,490 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:54,505 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:55,049 p=21516 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "903397bbd82e9b1f53087e3d7e8975d851857ce2", "msg": "Destination directory /var/log/panko does not exist"} >2018-06-22 09:06:55,049 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:55,069 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:06:55,118 p=21516 u=mistral | skipping: [compute-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:55,121 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:55,132 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:55,138 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:55,415 p=21516 u=mistral | ok: 
[controller-0] => (item=/var/lib/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/lib/rabbitmq", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:55,721 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/log/containers/rabbitmq", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:55,745 p=21516 u=mistral | TASK [rabbitmq logs readme] **************************************************** >2018-06-22 09:06:55,797 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:55,812 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:56,337 p=21516 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ee241f2199f264c9d0f384cf389fe255e8bf8a77", "msg": "Destination directory /var/log/rabbitmq does not exist"} >2018-06-22 09:06:56,338 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:56,359 p=21516 u=mistral | TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] *** >2018-06-22 09:06:56,413 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:56,425 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:56,723 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf\n echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done", "delta": "0:00:00.021456", "end": "2018-06-22 09:06:56.734593", "rc": 0, "start": "2018-06-22 09:06:56.713137", "stderr": "/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory\n/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "stderr_lines": ["/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory"], "stdout": "", "stdout_lines": []} >2018-06-22 09:06:56,745 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:06:56,803 p=21516 u=mistral | skipping: [compute-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:56,804 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:56,813 p=21516 u=mistral | skipping: [compute-0] => 
(item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:56,827 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:56,827 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:56,831 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:57,126 p=21516 u=mistral | ok: [controller-0] => (item=/var/lib/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/lib/redis", "mode": "0750", "owner": "redis", "path": "/var/lib/redis", "secontext": "system_u:object_r:redis_var_lib_t:s0", "size": 6, "state": "directory", "uid": 992} >2018-06-22 09:06:57,447 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers/redis) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/redis", "mode": "0755", "owner": "root", "path": "/var/log/containers/redis", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:57,772 p=21516 u=mistral | ok: [controller-0] => (item=/var/run/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/run/redis", "mode": "0755", "owner": "redis", "path": "/var/run/redis", "secontext": "system_u:object_r:redis_var_run_t:s0", "size": 40, "state": "directory", "uid": 992} >2018-06-22 09:06:57,794 p=21516 u=mistral | TASK [redis logs readme] ******************************************************* >2018-06-22 09:06:57,845 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:57,857 p=21516 u=mistral | 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:58,360 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42d03af8abf93e87fdb3fc69702638fc81d943fb", "dest": "/var/log/redis/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/redis/readme.txt", "secontext": "system_u:object_r:redis_log_t:s0", "size": 78, "state": "file", "uid": 0} >2018-06-22 09:06:58,382 p=21516 u=mistral | TASK [create /var/lib/sahara] ************************************************** >2018-06-22 09:06:58,429 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:58,441 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:58,740 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/sahara", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:58,765 p=21516 u=mistral | TASK [create persistent sahara logs directory] ********************************* >2018-06-22 09:06:58,815 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:58,829 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:59,111 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/sahara", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:06:59,133 p=21516 u=mistral | TASK [sahara logs readme] ****************************************************** >2018-06-22 09:06:59,180 p=21516 u=mistral | skipping: [compute-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:59,193 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:06:59,718 p=21516 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b0212a1177fa4a88502d17a1cbc31198040cf047", "msg": "Destination directory /var/log/sahara does not exist"} >2018-06-22 09:06:59,718 p=21516 u=mistral | ...ignoring >2018-06-22 09:06:59,744 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:06:59,797 p=21516 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:59,798 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:59,812 p=21516 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 09:06:59,818 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:00,088 p=21516 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-06-22 09:07:00,388 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-06-22 09:07:00,411 p=21516 u=mistral | TASK [Create swift logging 
symlink] ******************************************** >2018-06-22 09:07:00,458 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:00,474 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:00,749 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "dest": "/var/log/containers/swift", "gid": 0, "group": "root", "mode": "0777", "owner": "root", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 14, "src": "/var/log/swift", "state": "link", "uid": 0} >2018-06-22 09:07:00,772 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:07:00,825 p=21516 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:00,827 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:00,828 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:00,846 p=21516 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:00,853 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:00,856 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:01,114 p=21516 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", 
"item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 16, "state": "directory", "uid": 0} >2018-06-22 09:07:01,419 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 24, "state": "directory", "uid": 0} >2018-06-22 09:07:01,733 p=21516 u=mistral | ok: [controller-0] => (item=/var/log/containers) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers", "mode": "0755", "owner": "root", "path": "/var/log/containers", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 261, "state": "directory", "uid": 0} >2018-06-22 09:07:01,755 p=21516 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-06-22 09:07:01,802 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:01,803 p=21516 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_use_local_disks": true}, "changed": false} >2018-06-22 09:07:01,812 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:01,833 p=21516 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-06-22 09:07:01,882 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:01,895 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:02,168 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/srv/node/d1", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", 
"uid": 0} >2018-06-22 09:07:02,190 p=21516 u=mistral | TASK [swift logs readme] ******************************************************* >2018-06-22 09:07:02,237 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:02,250 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:02,729 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "checksum": "42510a6de124722d6efbc2b1bb038bfe97e5b6d3", "dest": "/var/log/swift/readme.txt", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/var/log/swift/readme.txt", "secontext": "system_u:object_r:var_log_t:s0", "size": 116, "state": "file", "uid": 0} >2018-06-22 09:07:02,752 p=21516 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-06-22 09:07:02,834 p=21516 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-06-22 09:07:02,907 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:07:02,932 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:02,968 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:03,256 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:03,278 p=21516 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-06-22 09:07:03,303 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:03,341 p=21516 
u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:03,937 p=21516 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-06-22 09:07:03,938 p=21516 u=mistral | ...ignoring >2018-06-22 09:07:03,959 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:07:03,987 p=21516 u=mistral | skipping: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:04,025 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:04,351 p=21516 u=mistral | ok: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:04,373 p=21516 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-06-22 09:07:04,398 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:04,432 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:05,025 p=21516 u=mistral | fatal: [compute-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-06-22 09:07:05,025 p=21516 u=mistral | ...ignoring >2018-06-22 09:07:05,046 p=21516 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-06-22 09:07:05,072 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:05,107 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:05,398 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"atime": 1529672747.6798565, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1529433183.0936344, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 5335882, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "18446744072695807771", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-22 09:07:05,420 p=21516 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-22 09:07:05,445 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:05,481 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:05,783 p=21516 u=mistral | ok: [compute-0] => {"changed": 
false, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "-.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Backlog": "128", "Before": "shutdown.target iscsid.service sockets.target", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": 
"22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-22 09:07:05,805 p=21516 u=mistral | TASK [create persistent logs directory] **************************************** >2018-06-22 09:07:05,832 p=21516 
u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:05,866 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:06,150 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:06,170 p=21516 u=mistral | TASK [nova logs readme] ******************************************************** >2018-06-22 09:07:06,195 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:06,229 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:06,773 p=21516 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-06-22 09:07:06,773 p=21516 u=mistral | ...ignoring >2018-06-22 09:07:06,794 p=21516 u=mistral | TASK [Mount Nova NFS Share] **************************************************** >2018-06-22 09:07:06,822 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:06,846 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:06,856 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:06,875 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:07:06,899 p=21516 u=mistral | skipping: [controller-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": 
"Conditional result was False"} >2018-06-22 09:07:06,899 p=21516 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:06,940 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:06,941 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:07,218 p=21516 u=mistral | ok: [compute-0] => (item=/var/lib/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/nova", "mode": "0755", "owner": "root", "path": "/var/lib/nova", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:07,511 p=21516 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-06-22 09:07:07,532 p=21516 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-22 09:07:07,557 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:07,590 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:07,898 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:07,922 p=21516 u=mistral | TASK [is Instance HA enabled] 
************************************************** >2018-06-22 09:07:07,949 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:07,982 p=21516 u=mistral | ok: [compute-0] => {"ansible_facts": {"instance_ha_enabled": false}, "changed": false} >2018-06-22 09:07:07,983 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,003 p=21516 u=mistral | TASK [prepare Instance HA script directory] ************************************ >2018-06-22 09:07:08,026 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,047 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,057 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,076 p=21516 u=mistral | TASK [install Instance HA script that runs nova-compute] *********************** >2018-06-22 09:07:08,099 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,120 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,131 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,150 p=21516 u=mistral | TASK [Get list of instance HA compute nodes] *********************************** >2018-06-22 09:07:08,175 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,197 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,208 p=21516 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,228 p=21516 u=mistral | TASK [If instance HA is enabled on the node activate the evacuation completed check] *** >2018-06-22 09:07:08,252 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,273 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,284 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,303 p=21516 u=mistral | TASK [create libvirt persistent data directories] ****************************** >2018-06-22 09:07:08,329 p=21516 u=mistral | skipping: [controller-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,352 p=21516 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,353 p=21516 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,353 p=21516 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,354 p=21516 u=mistral | skipping: [controller-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,365 p=21516 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,370 p=21516 u=mistral | skipping: [ceph-0] => 
(item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,375 p=21516 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,381 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,385 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:08,653 p=21516 u=mistral | ok: [compute-0] => (item=/etc/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt", "mode": "0700", "owner": "root", "path": "/etc/libvirt", "secontext": "system_u:object_r:virt_etc_t:s0", "size": 215, "state": "directory", "uid": 0} >2018-06-22 09:07:08,948 p=21516 u=mistral | ok: [compute-0] => (item=/etc/libvirt/secrets) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/secrets", "mode": "0700", "owner": "root", "path": "/etc/libvirt/secrets", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:09,239 p=21516 u=mistral | ok: [compute-0] => (item=/etc/libvirt/qemu) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/qemu", "mode": "0700", "owner": "root", "path": "/etc/libvirt/qemu", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 22, "state": "directory", "uid": 0} >2018-06-22 09:07:09,535 p=21516 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 
0} >2018-06-22 09:07:09,828 p=21516 u=mistral | ok: [compute-0] => (item=/var/log/containers/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/libvirt", "mode": "0755", "owner": "root", "path": "/var/log/containers/libvirt", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:09,850 p=21516 u=mistral | TASK [ensure qemu group is present on the host] ******************************** >2018-06-22 09:07:09,877 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:09,912 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:10,298 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "gid": 107, "name": "qemu", "state": "present", "system": false} >2018-06-22 09:07:10,319 p=21516 u=mistral | TASK [ensure qemu user is present on the host] ********************************* >2018-06-22 09:07:10,344 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:10,378 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:10,869 p=21516 u=mistral | ok: [compute-0] => {"append": false, "changed": false, "comment": "qemu user", "group": 107, "home": "/", "move_home": false, "name": "qemu", "shell": "/sbin/nologin", "state": "present", "uid": 107} >2018-06-22 09:07:10,890 p=21516 u=mistral | TASK [create directory for vhost-user sockets with qemu ownership] ************* >2018-06-22 09:07:10,918 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:10,953 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:11,236 p=21516 u=mistral | ok: 
[compute-0] => {"changed": false, "gid": 107, "group": "qemu", "mode": "0755", "owner": "qemu", "path": "/var/lib/vhost_sockets", "secontext": "system_u:object_r:virt_cache_t:s0", "size": 6, "state": "directory", "uid": 107} >2018-06-22 09:07:11,257 p=21516 u=mistral | TASK [check if libvirt is installed] ******************************************* >2018-06-22 09:07:11,287 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:11,328 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:11,636 p=21516 u=mistral | [WARNING]: Consider using the yum, dnf or zypper module rather than running >rpm. If you need to use command because yum, dnf or zypper is insufficient you >can add warn=False to this command task or set command_warnings=False in >ansible.cfg to get rid of this message. > >2018-06-22 09:07:11,636 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/usr/bin/rpm", "-q", "libvirt-daemon"], "delta": "0:00:00.031733", "end": "2018-06-22 09:07:11.649593", "failed_when_result": false, "rc": 0, "start": "2018-06-22 09:07:11.617860", "stderr": "", "stderr_lines": [], "stdout": "libvirt-daemon-3.9.0-14.el7_5.5.x86_64", "stdout_lines": ["libvirt-daemon-3.9.0-14.el7_5.5.x86_64"]} >2018-06-22 09:07:11,657 p=21516 u=mistral | TASK [make sure libvirt services are disabled] ********************************* >2018-06-22 09:07:11,684 p=21516 u=mistral | skipping: [controller-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:11,685 p=21516 u=mistral | skipping: [controller-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:11,726 p=21516 u=mistral | skipping: [ceph-0] => (item=libvirtd.service) => {"changed": false, "item": 
"libvirtd.service", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:11,728 p=21516 u=mistral | skipping: [ceph-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,027 p=21516 u=mistral | ok: [compute-0] => (item=libvirtd.service) => {"changed": false, "enabled": false, "item": "libvirtd.service", "name": "libvirtd.service", "state": "stopped", "status": {"ActiveEnterTimestamp": "Fri 2018-06-22 06:53:50 EDT", "ActiveEnterTimestampMonotonic": "34505603", "ActiveExitTimestamp": "Fri 2018-06-22 09:05:51 EDT", "ActiveExitTimestampMonotonic": "7955092692", "ActiveState": "inactive", "After": "network.target local-fs.target virtlogd.socket virtlockd.socket basic.target virtlockd.service iscsid.service virtlogd.service dbus.service system.slice apparmor.service remote-fs.target systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-06-22 06:53:50 EDT", "AssertTimestampMonotonic": "34361929", "Before": "shutdown.target libvirt-guests.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 06:53:50 EDT", "ConditionTimestampMonotonic": "34361929", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Virtualization daemon", "DevicePolicy": "auto", "Documentation": "man:libvirtd(8) https://libvirt.org", "EnvironmentFile": "/etc/sysconfig/libvirtd (ignore_errors=yes)", "ExecMainCode": "1", "ExecMainExitTimestamp": "Fri 2018-06-22 09:05:51 EDT", 
"ExecMainExitTimestampMonotonic": "7955101999", "ExecMainPID": "1181", "ExecMainStartTimestamp": "Fri 2018-06-22 06:53:50 EDT", "ExecMainStartTimestampMonotonic": "34365478", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/libvirtd ; argv[]=/usr/sbin/libvirtd $LIBVIRTD_ARGS ; ignore_errors=no ; start_time=[Fri 2018-06-22 06:53:50 EDT] ; stop_time=[Fri 2018-06-22 09:05:51 EDT] ; pid=1181 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/libvirtd.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "libvirtd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Fri 2018-06-22 09:05:51 EDT", "InactiveEnterTimestampMonotonic": "7955102088", "InactiveExitTimestamp": "Fri 2018-06-22 06:53:50 EDT", "InactiveExitTimestampMonotonic": "34365528", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "8192", "LimitNPROC": "22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "libvirtd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": 
"replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target virtlogd.socket virtlockd.socket", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "32768", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "WantedBy": "libvirt-guests.service", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-22 09:07:12,338 p=21516 u=mistral | ok: [compute-0] => (item=virtlogd.socket) => {"changed": false, "enabled": false, "item": "virtlogd.socket", "name": "virtlogd.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Fri 2018-06-22 06:53:19 EDT", "ActiveEnterTimestampMonotonic": "2905266", "ActiveExitTimestamp": "Fri 2018-06-22 09:05:51 EDT", "ActiveExitTimestampMonotonic": "7955274500", "ActiveState": "inactive", "After": "-.slice sysinit.target -.mount", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", 
"AssertTimestamp": "Fri 2018-06-22 06:53:19 EDT", "AssertTimestampMonotonic": "2904494", "Backlog": "128", "Before": "virtlogd.service shutdown.target sockets.target libvirtd.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 06:53:19 EDT", "ConditionTimestampMonotonic": "2904494", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Virtual machine log manager socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "FragmentPath": "/usr/lib/systemd/system/virtlogd.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "virtlogd.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Fri 2018-06-22 09:05:51 EDT", "InactiveEnterTimestampMonotonic": "7955274500", "InactiveExitTimestamp": "Fri 2018-06-22 06:53:19 EDT", "InactiveExitTimestampMonotonic": "2905266", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22967", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": 
"18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "ListenStream": "/var/run/libvirt/virtlogd-sock", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "virtlogd.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "RequiredBy": "virtlogd.service libvirtd.service", "Requires": "sysinit.target -.mount", "RequiresMountsFor": "/var/run/libvirt/virtlogd-sock", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "virtlogd.service", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-06-22 09:07:12,365 p=21516 u=mistral | TASK [create persistent directories] ******************************************* 
>2018-06-22 09:07:12,398 p=21516 u=mistral | skipping: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,399 p=21516 u=mistral | skipping: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,430 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,431 p=21516 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,446 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,453 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,476 p=21516 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-06-22 09:07:12,504 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,529 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,540 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,561 p=21516 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-06-22 09:07:12,586 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-22 09:07:12,608 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,619 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,639 p=21516 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-06-22 09:07:12,663 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,683 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,693 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,712 p=21516 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-06-22 09:07:12,735 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,758 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,768 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,788 p=21516 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-06-22 09:07:12,812 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,835 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,847 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,869 p=21516 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** 
>2018-06-22 09:07:12,895 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,917 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,928 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:12,952 p=21516 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-06-22 09:07:12,978 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,000 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,016 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,039 p=21516 u=mistral | TASK [create persistent directories] ******************************************* >2018-06-22 09:07:13,090 p=21516 u=mistral | skipping: [controller-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,090 p=21516 u=mistral | skipping: [controller-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,091 p=21516 u=mistral | skipping: [controller-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,092 p=21516 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,097 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was 
False"} >2018-06-22 09:07:13,098 p=21516 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,104 p=21516 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,108 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,113 p=21516 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,134 p=21516 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-06-22 09:07:13,158 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,184 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,194 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,214 p=21516 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-06-22 09:07:13,239 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,262 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,274 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,293 p=21516 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-06-22 09:07:13,321 p=21516 u=mistral | 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,374 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,386 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,406 p=21516 u=mistral | TASK [swift logs readme] ******************************************************* >2018-06-22 09:07:13,432 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,454 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,464 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:13,485 p=21516 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-06-22 09:07:13,559 p=21516 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-06-22 09:07:13,623 p=21516 u=mistral | PLAY [External deployment step 1] ********************************************** >2018-06-22 09:07:13,645 p=21516 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-22 09:07:13,671 p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"blacklisted_hostnames": []}, "changed": false} >2018-06-22 09:07:13,687 p=21516 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-22 09:07:13,892 p=21516 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars", "mode": "0755", "owner": "mistral", "path": 
"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-22 09:07:14,048 p=21516 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-22 09:07:14,211 p=21516 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir) => {"changed": true, "gid": 985, "group": "mistral", "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988} >2018-06-22 09:07:14,229 p=21516 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-22 09:07:14,802 p=21516 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "87ac4959715a33a06028c69b6c3ea4a5d7293cae", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/inventory.yml", "gid": 985, "group": "mistral", "md5sum": "979b46b7bc4f15cc49e1ab2540ac09dc", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 525, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529672834.5-176500071929181/source", "state": "file", "uid": 988} >2018-06-22 09:07:14,818 p=21516 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-22 09:07:14,854 
p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_all": {"ceph_conf_overrides": {"global": {"osd_pool_default_pg_num": 32, "osd_pool_default_pgp_num": 32, "osd_pool_default_size": 1, "rgw_keystone_accepted_roles": "Member, admin", "rgw_keystone_admin_domain": "default", "rgw_keystone_admin_password": "r4vvqGIopZIGavHfqwBD5EZm2", "rgw_keystone_admin_project": "service", "rgw_keystone_admin_user": "swift", "rgw_keystone_api_version": 3, "rgw_keystone_implicit_tenants": "true", "rgw_keystone_revocation_interval": "0", "rgw_keystone_url": "http://172.17.1.17:5000", "rgw_s3_auth_use_keystone": "true"}}, "ceph_docker_image": "rhceph", "ceph_docker_image_tag": "3-6", "ceph_docker_registry": "192.168.24.1:8787", "ceph_origin": "distro", "ceph_stable": true, "cluster": "ceph", "cluster_network": "172.17.4.0/24", "containerized_deployment": true, "docker": true, "fsid": "53912472-747b-11e8-95a3-5254003d7dcb", "generate_fsid": false, "ip_version": "ipv4", "keys": [{"key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r", "name": "client.openstack", "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, {"key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mds_cap": "allow *", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "name": "client.manila", "osd_cap": "allow rw"}, {"key": "AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow rw", "name": "client.radosgw", "osd_cap": "allow rwx"}], "monitor_address_block": "172.17.3.0/24", "ntp_service_enabled": false, "openstack_config": true, "openstack_keys": [{"key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mgr_cap": "allow *", 
"mode": "0600", "mon_cap": "allow r", "name": "client.openstack", "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, {"key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mds_cap": "allow *", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "name": "client.manila", "osd_cap": "allow rw"}, {"key": "AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow rw", "name": "client.radosgw", "osd_cap": "allow rwx"}], "openstack_pools": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": ""}, {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": ""}, {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": ""}], "pools": [], "public_network": "172.17.3.0/24", "user_config": true}}, "changed": false} >2018-06-22 09:07:14,873 p=21516 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-22 09:07:15,198 p=21516 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "2ef1c16fef5f2acadbb7d229126152ecda226303", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars/all.yml", "gid": 985, "group": "mistral", "md5sum": "253bfbf148fef2712fbcc2a2f29c2d8a", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 3030, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529672834.91-69448758155289/source", "state": "file", "uid": 988} >2018-06-22 09:07:15,214 p=21516 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* 
>2018-06-22 09:07:15,242 p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_extra_vars": {"fetch_directory": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir", "ireallymeanit": "yes"}}, "changed": false} >2018-06-22 09:07:15,257 p=21516 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-22 09:07:15,577 p=21516 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "7decc9b4b57fbf03b512c50bd65709ec248d4054", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/extra_vars.yml", "gid": 985, "group": "mistral", "md5sum": "51b958c187e1082154a223f6a3d44402", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 115, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529672835.28-178128494702636/source", "state": "file", "uid": 988} >2018-06-22 09:07:15,593 p=21516 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-22 09:07:15,899 p=21516 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "0ed9243967d775f1d706f954c81c53dbea91f151", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/nodes_uuid_playbook.yml", "gid": 985, "group": "mistral", "md5sum": "afa7e006582a1713f57c3de7724c9f39", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 157, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529672835.62-176426672710583/source", "state": "file", "uid": 988} >2018-06-22 09:07:15,915 p=21516 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-22 09:07:15,931 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:15,947 p=21516 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-22 09:07:15,967 p=21516 u=mistral 
| skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:15,982 p=21516 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-22 09:07:16,001 p=21516 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-06-22 09:07:16,016 p=21516 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-22 09:07:16,044 p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mgrs": {"ceph_mgr_docker_extra_env": "-e MGR_DASHBOARD=0"}}, "changed": false} >2018-06-22 09:07:16,059 p=21516 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-06-22 09:07:16,378 p=21516 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "06d130f3471f2ac09bb0161450878cf64bafd8af", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars/mgrs.yml", "gid": 985, "group": "mistral", "md5sum": "0d3c03a4186ad82120a728e0470a49d9", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 46, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529672836.08-145579126116327/source", "state": "file", "uid": 988} >2018-06-22 09:07:16,395 p=21516 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-22 09:07:16,422 p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mons": {"admin_secret": "AQB2NypbAAAAABAADYq0x/U/g/5X5IHsGSXANQ==", "monitor_secret": "AQB2NypbAAAAABAA67vSeiofLzzYgrjDnmeGYg=="}}, "changed": false} >2018-06-22 09:07:16,437 p=21516 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-22 09:07:16,754 p=21516 u=mistral | 
changed: [undercloud] => {"changed": true, "checksum": "719e0f5af2a6bb3f7c520087bffa8e6653fc9cbd", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars/mons.yml", "gid": 985, "group": "mistral", "md5sum": "6826ff7a84879618ddc5f5704567757d", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 112, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529672836.47-193696256941933/source", "state": "file", "uid": 988} >2018-06-22 09:07:16,770 p=21516 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-22 09:07:16,800 p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_clients": {}}, "changed": false} >2018-06-22 09:07:16,817 p=21516 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-22 09:07:17,135 p=21516 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars/clients.yml", "gid": 985, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529672836.85-206150309301266/source", "state": "file", "uid": 988} >2018-06-22 09:07:17,151 p=21516 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-22 09:07:17,179 p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_osds": {"devices": ["/dev/vdb"], "journal_size": 512, "osd_objectstore": "filestore", "osd_scenario": "collocated"}}, "changed": false} >2018-06-22 09:07:17,194 p=21516 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-22 09:07:17,515 p=21516 u=mistral | changed: [undercloud] => {"changed": true, 
"checksum": "454c7fd1ab87fd8f8ec07c9874039814cbe681cf", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars/osds.yml", "gid": 985, "group": "mistral", "md5sum": "e03a30f138554d36c1743c14fd3d9357", "mode": "0644", "owner": "mistral", "secontext": "system_u:object_r:var_lib_t:s0", "size": 90, "src": "/home/mistral/.ansible/tmp/ansible-tmp-1529672837.22-4324097018262/source", "state": "file", "uid": 988} >2018-06-22 09:07:17,520 p=21516 u=mistral | PLAY [Overcloud deploy step tasks for 1] *************************************** >2018-06-22 09:07:17,543 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:07:17,589 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:17,602 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:17,666 p=21516 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-06-22 09:07:18,112 p=21516 u=mistral | changed: [controller-0] => {"changed": true} >2018-06-22 09:07:18,134 p=21516 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-06-22 09:07:18,770 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]} >2018-06-22 09:07:18,792 p=21516 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-06-22 09:07:19,136 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:19,157 p=21516 u=mistral | TASK 
[container-registry : unset mountflags] *********************************** >2018-06-22 09:07:19,650 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-06-22 09:07:19,671 p=21516 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-06-22 09:07:20,160 p=21516 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 09:07:20,181 p=21516 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-06-22 09:07:20,528 p=21516 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-06-22 09:07:20,549 p=21516 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-06-22 09:07:20,894 p=21516 u=mistral | changed: [controller-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:20,921 p=21516 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-06-22 09:07:21,551 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672840.96-116500892533679/source", 
"state": "file", "uid": 0} >2018-06-22 09:07:21,574 p=21516 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-06-22 09:07:21,925 p=21516 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 09:07:21,946 p=21516 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-06-22 09:07:22,287 p=21516 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 09:07:22,308 p=21516 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-06-22 09:07:22,654 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-06-22 09:07:22,677 p=21516 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-06-22 09:07:22,697 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:22,718 p=21516 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-06-22 09:07:23,123 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "name": null, "status": {}} >2018-06-22 09:07:23,146 p=21516 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-06-22 09:07:24,862 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "registries.service systemd-journald.socket basic.target docker-storage-setup.service system.slice rhel-push-plugin.socket network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", 
"Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", 
"GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service basic.target rhel-push-plugin.socket docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", 
"StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-22 09:07:24,887 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:07:24,915 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:24,952 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:24,992 p=21516 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-06-22 09:07:25,385 p=21516 u=mistral | changed: [compute-0] => {"changed": true} >2018-06-22 09:07:25,404 p=21516 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-06-22 09:07:26,061 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]} >2018-06-22 09:07:26,080 p=21516 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-06-22 09:07:26,461 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, 
"state": "directory", "uid": 0} >2018-06-22 09:07:26,512 p=21516 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-06-22 09:07:26,855 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-06-22 09:07:26,871 p=21516 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-06-22 09:07:27,213 p=21516 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 09:07:27,233 p=21516 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-06-22 09:07:27,576 p=21516 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-06-22 09:07:27,596 p=21516 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-06-22 09:07:27,940 p=21516 u=mistral | changed: [compute-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:27,963 p=21516 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-06-22 09:07:28,572 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672848.01-90318455063944/source", "state": "file", "uid": 0} >2018-06-22 09:07:28,590 p=21516 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-06-22 09:07:28,937 p=21516 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 09:07:28,954 p=21516 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-06-22 09:07:29,292 p=21516 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 09:07:29,309 p=21516 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-06-22 09:07:29,655 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-06-22 09:07:29,673 p=21516 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-06-22 09:07:29,695 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:29,712 p=21516 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-06-22 09:07:30,106 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "name": null, "status": {}} >2018-06-22 09:07:30,125 p=21516 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-06-22 09:07:31,797 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket basic.target docker-storage-setup.service registries.service system.slice network.target rhel-push-plugin.socket", "AllowIsolate": "no", 
"AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", 
"FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service rhel-push-plugin.socket basic.target docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", 
"StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-22 09:07:31,818 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:07:31,845 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:31,867 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:31,878 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:31,898 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:07:31,924 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:31,946 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:31,958 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:31,978 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:07:32,004 p=21516 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:32,025 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:32,078 p=21516 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-06-22 09:07:32,383 p=21516 u=mistral | changed: [ceph-0] => {"changed": true} >2018-06-22 09:07:32,402 p=21516 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-06-22 09:07:32,974 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-63.git94f4240.el7.x86_64 providing docker is already installed"]} >2018-06-22 09:07:32,992 p=21516 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-06-22 09:07:33,305 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:33,323 p=21516 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-06-22 09:07:33,651 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-06-22 09:07:33,667 p=21516 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-06-22 09:07:33,999 p=21516 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 09:07:34,015 p=21516 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY 
in /etc/sysconfig/docker] *** >2018-06-22 09:07:34,339 p=21516 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-06-22 09:07:34,357 p=21516 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-06-22 09:07:34,690 p=21516 u=mistral | changed: [ceph-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:34,717 p=21516 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-06-22 09:07:35,312 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672854.76-91757793962520/source", "state": "file", "uid": 0} >2018-06-22 09:07:35,328 p=21516 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-06-22 09:07:35,652 p=21516 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 09:07:35,668 p=21516 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-06-22 09:07:36,000 p=21516 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-06-22 09:07:36,016 p=21516 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-06-22 09:07:36,342 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 1003, "name": "docker", "state": 
"present", "system": false} >2018-06-22 09:07:36,359 p=21516 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-06-22 09:07:36,380 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:07:36,397 p=21516 u=mistral | TASK [container-registry : force systemd to reread configs] ******************** >2018-06-22 09:07:36,793 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "name": null, "status": {}} >2018-06-22 09:07:36,810 p=21516 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-06-22 09:07:38,540 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "network.target docker-storage-setup.service system.slice registries.service rhel-push-plugin.socket basic.target systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 
PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14904", "LimitSTACK": "18446744073709551615", "LoadState": 
"loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target rhel-push-plugin.socket registries.service docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-06-22 09:07:38,542 p=21516 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-06-22 09:07:41,233 p=21516 u=mistral | changed: 
[compute-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-06-22 09:07:31 EDT", "ActiveEnterTimestampMonotonic": "8055335900", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket registries.service basic.target docker-storage-setup.service network.target rhel-push-plugin.socket system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-06-22 09:07:30 EDT", "AssertTimestampMonotonic": "8054173633", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 09:07:30 EDT", "ConditionTimestampMonotonic": "8054173632", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "26909", "ExecMainStartTimestamp": "Fri 2018-06-22 09:07:30 EDT", "ExecMainStartTimestampMonotonic": "8054174710", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; 
status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-06-22 09:07:30 EDT] ; stop_time=[n/a] ; pid=26909 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-06-22 09:07:30 EDT", "InactiveExitTimestampMonotonic": "8054174744", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22967", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "26909", "MemoryAccounting": "no", "MemoryCurrent": "61542400", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", 
"OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service docker-cleanup.timer basic.target rhel-push-plugin.socket", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "20", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Fri 2018-06-22 09:07:31 EDT", "WatchdogTimestampMonotonic": "8055335675", "WatchdogUSec": "0"}} >2018-06-22 09:07:41,237 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-06-22 09:07:24 EDT", "ActiveEnterTimestampMonotonic": "8047294243", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "docker-storage-setup.service registries.service system.slice systemd-journald.socket 
rhel-push-plugin.socket basic.target network.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-06-22 09:07:23 EDT", "AssertTimestampMonotonic": "8046126804", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 09:07:23 EDT", "ConditionTimestampMonotonic": "8046126804", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "17219", "ExecMainStartTimestamp": "Fri 2018-06-22 09:07:23 EDT", "ExecMainStartTimestampMonotonic": "8046128084", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current 
--init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-06-22 09:07:23 EDT] ; stop_time=[n/a] ; pid=17219 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-06-22 09:07:23 EDT", "InactiveExitTimestampMonotonic": "8046128119", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "17219", "MemoryAccounting": "no", "MemoryCurrent": "66334720", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer rhel-push-plugin.socket 
basic.target registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "25", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "system.slice docker-storage-setup.service", "WatchdogTimestamp": "Fri 2018-06-22 09:07:24 EDT", "WatchdogTimestampMonotonic": "8047294191", "WatchdogUSec": "0"}} >2018-06-22 09:07:41,270 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Fri 2018-06-22 09:07:38 EDT", "ActiveEnterTimestampMonotonic": "8062066049", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "registries.service docker-storage-setup.service rhel-push-plugin.socket network.target systemd-journald.socket system.slice basic.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Fri 2018-06-22 09:07:37 EDT", "AssertTimestampMonotonic": "8060867662", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", 
"CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Fri 2018-06-22 09:07:37 EDT", "ConditionTimestampMonotonic": "8060867646", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "24157", "ExecMainStartTimestamp": "Fri 2018-06-22 09:07:37 EDT", "ExecMainStartTimestampMonotonic": "8060869032", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Fri 2018-06-22 09:07:37 EDT] ; stop_time=[n/a] ; pid=24157 ; code=(null) ; status=0/0 }", "FailureAction": "none", 
"FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Fri 2018-06-22 09:07:37 EDT", "InactiveExitTimestampMonotonic": "8060869063", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14904", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "24157", "MemoryAccounting": "no", "MemoryCurrent": "63479808", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "docker-cleanup.timer rhel-push-plugin.socket registries.service basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", 
"StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "16", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Fri 2018-06-22 09:07:38 EDT", "WatchdogTimestampMonotonic": "8062065992", "WatchdogUSec": "0"}} >2018-06-22 09:07:41,277 p=21516 u=mistral | PLAY [Overcloud common deploy step tasks 1] ************************************ >2018-06-22 09:07:41,303 p=21516 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-22 09:07:41,789 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:41,795 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:41,814 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 
09:07:41,835 p=21516 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-22 09:07:42,547 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "149113e83b0cb4d05192576bcff7b6fc0f316bd0", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "66bedc7c4ccee7cb079b118c09f8c08c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1630, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672861.88-147117412802624/source", "state": "file", "uid": 0} >2018-06-22 09:07:42,563 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f8a32eb42203ada5e675fbde141df7f32100af5c", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "c727dc3c35ede89e7c3d894e3fb81da4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1588, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672861.93-80413339300445/source", "state": "file", "uid": 0} >2018-06-22 09:07:42,572 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "be3cadf4421fbe374d33f269513ff6e3f1c7ab66", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "86461fb932aeaba90516617c8168d5f2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1576, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672861.9-127046555054160/source", "state": "file", "uid": 0} >2018-06-22 09:07:42,595 p=21516 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-22 09:07:42,992 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-06-22 
09:07:43,020 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-06-22 09:07:43,025 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-06-22 09:07:43,046 p=21516 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-22 09:07:43,731 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "c8d0c143121b7904490da6698d68f76bf1957b51", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "c6d9b1246ac65ebadc18213639c2431d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 234, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672863.13-140408582389277/source", "state": "file", "uid": 0} >2018-06-22 09:07:43,784 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "c5bc7cf017025a018ebda9dd2ad6aac290a51bef", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "b53dfdbc008416d050550640e4219f21", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 13304, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672863.13-160002347621070/source", "state": "file", "uid": 0} >2018-06-22 09:07:43,792 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "09cb610f7fea36dc33be3297b42ac38af987732e", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "e806efb887de6e5795dea0490c302e84", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 
2288, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672863.12-270124186599648/source", "state": "file", "uid": 0} >2018-06-22 09:07:43,814 p=21516 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-06-22 09:07:44,239 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:44,242 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:44,267 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:44,289 p=21516 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-22 09:07:44,663 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-06-22 09:07:44,719 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-06-22 09:07:44,720 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-06-22 09:07:44,744 p=21516 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-06-22 09:07:45,449 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get 
/etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr 
\'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": true, "checksum": "4e350e3d48cba294f2ccab34eb03c1dee23e7f82", "dest": "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) 
compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "md5sum": "ed5dca102b28b4f992943612dee2dced", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672864.83-87234014985071/source", "state": "file", "uid": 0} >2018-06-22 09:07:45,450 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) 
=> {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672864.82-15062458672186/source", "state": "file", "uid": 0} >2018-06-22 09:07:46,057 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list 
--name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": true, "checksum": "e77b96beec241bb84928d298a778521376225c0d", "dest": "/var/lib/docker-config-scripts/create_swift_secret.sh", "gid": 0, "group": "root", "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "md5sum": "9277d70c2fd62961998c5fce0a8aeee2", "mode": "0700", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 1125, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672865.47-42869699125008/source", "state": "file", "uid": 0} >2018-06-22 09:07:46,657 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672866.08-133953580085008/source", "state": "file", "uid": 0} >2018-06-22 09:07:47,252 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': 'set_swift_keymaster_key_id.sh'}) => {"changed": true, "checksum": "9c2474fa6e4a8869674b689206eb1a1658a28fc6", "dest": "/var/lib/docker-config-scripts/set_swift_keymaster_key_id.sh", "gid": 0, "group": "root", "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport 
OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "md5sum": "054225f8957e4457ef2103ce24d44b04", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1275, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672866.68-89425987515735/source", "state": "file", "uid": 0} >2018-06-22 09:07:47,840 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n 
--modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': 'docker_puppet_apply.sh'}) => {"changed": true, "checksum": "93afaa6df42c9ead7768b295fa901f83ae1b39ef", "dest": "/var/lib/docker-config-scripts/docker_puppet_apply.sh", "gid": 0, "group": "root", "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "md5sum": "709b2caef95cc7486f9b851414e71133", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 653, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672867.28-73399063639649/source", "state": "file", "uid": 0} >2018-06-22 09:07:48,441 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell 
--name=default"\nfi\n', 'mode': u'0700'}, 'key': 'nova_api_ensure_default_cell.sh'}) => {"changed": true, "checksum": "0a839197c2fa15204014befb1c771a17aea5bdd1", "dest": "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "md5sum": "12a4a82656ddaae342942097b752d9db", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 442, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672867.87-131122619903020/source", "state": "file", "uid": 0} >2018-06-22 09:07:48,467 p=21516 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-06-22 09:07:48,532 p=21516 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,533 p=21516 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,537 p=21516 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,544 p=21516 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified 
for this result", "changed": false} >2018-06-22 09:07:48,544 p=21516 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,548 p=21516 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,550 p=21516 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,568 p=21516 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,569 p=21516 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,569 p=21516 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,570 p=21516 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,572 p=21516 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,575 p=21516 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,576 p=21516 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output 
has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,584 p=21516 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,593 p=21516 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,600 p=21516 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,609 p=21516 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,633 p=21516 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-06-22 09:07:48,735 p=21516 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:48,750 p=21516 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:49,201 p=21516 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:07:49,221 p=21516 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-22 09:07:49,936 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "ea8622945980cce2aa6f6a0ec285f28fef454eb3", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": 
"6a2e3c98b99c4f234941b76485bb3f0e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 11909, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672869.3-124305901331435/source", "state": "file", "uid": 0} >2018-06-22 09:07:49,940 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "740e1e28c1c247d902ea93a5c5658cbb3b0f6a6b", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "03c25611f6536d6c60998df8d5135622", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 105573, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672869.27-78527406470496/source", "state": "file", "uid": 0} >2018-06-22 09:07:49,941 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "ce9bc1dccca0cdcaa3098c1a790d78a8c694a5a4", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "ccd9b33a462e8e1243e2dc1f30301019", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1055, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672869.31-92710433964223/source", "state": "file", "uid": 0} >2018-06-22 09:07:49,964 p=21516 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-22 09:07:50,657 p=21516 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672870.03-250268157019643/source", "state": "file", "uid": 0} >2018-06-22 09:07:50,680 p=21516 u=mistral | changed: [ceph-0] 
=> (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672870.05-210898388839598/source", "state": "file", "uid": 0} >2018-06-22 09:07:50,707 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 
'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL', u'DB_ROOT_PASSWORD=zeHIZe0ICg'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin 
-uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': 
{'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": true, "checksum": "6ed04ef67fe6d8f97037e1cd69a5309ba391ac53", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin 
-uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL", "DB_ROOT_PASSWORD=zeHIZe0ICg"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", 
"start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "md5sum": "04ad0163fb197eeb581f7e65b7213dab", "mode": "0600", "owner": 
"root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 7434, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672870.05-205255616427415/source", "state": "file", "uid": 0} >2018-06-22 09:07:51,309 p=21516 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672870.69-249804714803662/source", "state": "file", "uid": 0} >2018-06-22 09:07:51,328 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', 
u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": true, "checksum": "7410b402d81937d9a195a3bf5e8207fa09cdb6e0", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", 
"/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "md5sum": "57cce5acf78ba9c384000a575f958249", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 5050, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672870.67-239673426855745/source", "state": "file", "uid": 0} >2018-06-22 09:07:51,380 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', 
u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': 
False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', 
u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'6CLNy5Ewot5UhcBYmt27oGDMD'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": true, "checksum": "16f70a31b7af2c706e6f92cce58994006ac0aab9", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": 
["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", 
"ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "6CLNy5Ewot5UhcBYmt27oGDMD"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", 
"/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "md5sum": "96751e80b3a4c2d2ff5e757c69bbd0f1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21820, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672870.71-280505621379279/source", "state": "file", "uid": 0} >2018-06-22 09:07:51,940 p=21516 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672871.32-174699288546945/source", "state": "file", "uid": 0} >2018-06-22 09:07:51,988 p=21516 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": 
"/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672871.33-229568354666234/source", "state": "file", "uid": 0} >2018-06-22 09:07:52,031 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko 
/var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': 
[u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': 
{'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': 
[u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": true, "checksum": "32641f02d8530e99d55d2507d2f8c4f55f7c84ee", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": 
["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "md5sum": "1c1a67997e7c7aefb11423c0b154cfa3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 17318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672871.38-269510948339519/source", "state": "file", "uid": 0} >2018-06-22 09:07:52,561 p=21516 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672871.95-89474512679164/source", "state": "file", "uid": 0} >2018-06-22 09:07:52,646 p=21516 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672872.0-110266233028209/source", "state": "file", "uid": 0} >2018-06-22 09:07:52,694 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': 
[u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": true, "checksum": "c2a7ac6b8005e79d78aef7c5ad151bc86910a864", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", 
"/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], 
"config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", 
"/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "md5sum": "c0b0bae79696f841ec4ed6b9ea7d192a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10552, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672872.03-274032331853752/source", "state": "file", "uid": 0} >2018-06-22 09:07:53,174 p=21516 u=mistral | changed: [ceph-0] => (item={'value': 
{'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "8acd94aee3f5b5403e8fb7f16593594f245dafee", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "md5sum": "2aaa44b365bea28e18d96f2f17bef412", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 973, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672872.57-261603322779737/source", "state": "file", "uid": 0} >2018-06-22 09:07:53,307 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 
'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "0d417e60cd9c4b580b8889ca2b34ab7a7cd1c84e", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": 
"always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "md5sum": "43f4c7750111fb2e9d00b850149a8ce7", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6779, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672872.66-233433501446044/source", "state": "file", "uid": 0} >2018-06-22 09:07:53,394 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 
'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', 
u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 
'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': 
u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 
'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', 
u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 
'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "a1be6aa2d4cc45e104b7c75319745196e636d5d2", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", 
"/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, 
"restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": 
"host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, 
"swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", 
"/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", 
"/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", 
"/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "md5sum": "1f138d32563935823e0ae333e7382fb3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 48375, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672872.7-270353374218873/source", "state": "file", "uid": 0} >2018-06-22 09:07:53,801 p=21516 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672873.18-193512207263064/source", "state": "file", "uid": 0} >2018-06-22 09:07:53,973 p=21516 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672873.31-103800701222742/source", "state": "file", "uid": 0} >2018-06-22 09:07:54,027 p=21516 u=mistral | changed: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672873.37-7236517177602/source", "state": "file", "uid": 0} >2018-06-22 09:07:54,155 p=21516 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-22 09:07:54,550 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:54,561 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:54,570 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-06-22 09:07:54,593 p=21516 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-22 09:07:55,288 p=21516 u=mistral | changed: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": 
"4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672874.68-57056320511206/source", "state": "file", "uid": 0} >2018-06-22 09:07:55,333 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672874.68-126414552109481/source", "state": "file", "uid": 0} >2018-06-22 09:07:55,461 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", 
"owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672874.81-237197221296172/source", "state": "file", "uid": 0} >2018-06-22 09:07:55,981 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672875.34-32748756019949/source", "state": "file", "uid": 0} >2018-06-22 09:07:56,099 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/keystone.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672875.47-3096509871820/source", "state": "file", "uid": 0} >2018-06-22 09:07:56,623 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": true, "checksum": "b50cbe1f8b020aa49249248b57310f45005813b3", "dest": "/var/lib/kolla/config_files/nova_libvirt.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "8356787bbcfcb5674a0bf2570719654a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 512, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672875.99-197400204744414/source", "state": "file", "uid": 0} >2018-06-22 09:07:56,735 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": true, "checksum": "0e697e31bdc439b99552bac9ffe0bab07f2af4a4", "dest": "/var/lib/kolla/config_files/cinder_backup.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "8e107eb8f6989be8375a0ff2dd5b4d57", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672876.11-193375438864698/source", "state": "file", "uid": 0} >2018-06-22 09:07:57,268 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": true, "checksum": "6a0a936a324363cd605e22c2327c17deb6dfbec2", "dest": "/var/lib/kolla/config_files/nova-migration-target.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "md5sum": "161558d57b182ca70c6f9bbd7fcbda8a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 258, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672876.63-33393323766383/source", "state": "file", "uid": 0} >2018-06-22 09:07:57,379 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672876.74-86779734158358/source", "state": 
"file", "uid": 0} >2018-06-22 09:07:57,910 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": true, "checksum": "8bbfe195e54ddfe481aaad9744174f7344d49681", "dest": "/var/lib/kolla/config_files/nova_virtlogd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "786b962e2df778e3ce02b185ef93deac", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 193, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672877.27-252849606962301/source", "state": "file", "uid": 0} >2018-06-22 09:07:58,022 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": true, "checksum": "413730fbf3f7935085cfda60cbc1535d8bce0caf", "dest": "/var/lib/kolla/config_files/swift_account_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "dfccd947a56ceb6fa2b71c400281a365", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 
200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672877.39-124292507911638/source", "state": "file", "uid": 0} >2018-06-22 09:07:58,567 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672877.92-71038806444505/source", "state": "file", "uid": 0} >2018-06-22 09:07:58,672 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": true, "checksum": "2bf5ca66cb377c9fa3e6880f8b078d1312470cde", "dest": "/var/lib/kolla/config_files/swift_account_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator 
/etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d4a857b7e18f40f1cc1e6fd265c89770", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672878.03-259873939779436/source", "state": "file", "uid": 0} >2018-06-22 09:07:59,217 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": true, "checksum": "bb1c3bcd199b74791ea32746c08f4925a3b585a2", "dest": "/var/lib/kolla/config_files/nova_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": 
true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "70b809037933259f45bb1585e9e6a4cc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 643, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672878.57-75770427508440/source", "state": "file", "uid": 0} >2018-06-22 09:07:59,329 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": true, "checksum": "e01d19d7f7cff24dfcc0d132b7d8ceabba199142", "dest": "/var/lib/kolla/config_files/aodh_notifier.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "5d4a748030a9a7476ccbd8902fb654fc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672878.68-206641208427890/source", "state": "file", "uid": 0} >2018-06-22 09:07:59,876 p=21516 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": true, "checksum": "4b3e97fcd87fd70b35934d1ef908747f302a4d11", "dest": 
"/var/lib/kolla/config_files/ceilometer_agent_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d91832a36a0ad3616a4e78c1af7d0db5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672879.22-124839631775988/source", "state": "file", "uid": 0} >2018-06-22 09:07:59,969 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": true, "checksum": "23416bae23a2c08d2c534f76d19f8c4bad40ee92", "dest": "/var/lib/kolla/config_files/nova_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "d00e4198d95dede3f0b6ac351d57a982", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672879.34-131078962627656/source", "state": "file", "uid": 0} >2018-06-22 09:08:00,560 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": true, "checksum": "a13a92b47f931e2e89d7e4bf5057a4307ab9cd45", "dest": "/var/lib/kolla/config_files/heat_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "e671c4783cc86fb2ad300fcd11b2f99b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672879.98-76267642847645/source", "state": "file", "uid": 0} >2018-06-22 09:08:01,135 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': 
'/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": true, "checksum": "da289f102f641cdd0a02df41c443d7d8387741a5", "dest": "/var/lib/kolla/config_files/neutron_dhcp.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "md5sum": "c5975567082648a9da814c433c49f2d6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 875, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672880.57-247334475737362/source", "state": "file", "uid": 0} >2018-06-22 09:08:01,715 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': 
u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": true, "checksum": "0801385cb9292b3b6eb8440166435242bd90e288", "dest": "/var/lib/kolla/config_files/haproxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "md5sum": "a2742f7abd50bb0af0a4ba55b2f1f4ff", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 648, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672881.14-221629490577758/source", "state": "file", "uid": 0} >2018-06-22 09:08:02,302 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": true, "checksum": "c1a1552a71f4daefebff5234f9d8ba71f4c64d76", "dest": "/var/lib/kolla/config_files/nova_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": 
{"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "6b8ef057a2e5539eacd9f29fc4b94036", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672881.72-159729250209226/source", "state": "file", "uid": 0} >2018-06-22 09:08:02,887 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": true, "checksum": "a6d2eb62af2f11437c704d13adf72d498324ce2a", "dest": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": 
"d586f0c2ff043bece10efff986d635a3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 531, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672882.31-102533383748617/source", "state": "file", "uid": 0} >2018-06-22 09:08:03,468 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": true, "checksum": "b061cf7478060add5d079aafaeae81b445251a8f", "dest": "/var/lib/kolla/config_files/swift_account_reaper.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "0f3bbe74ca95c8cca321ee32e2aff7d1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672882.9-274062817812152/source", "state": "file", "uid": 0} >2018-06-22 09:08:04,054 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": true, "checksum": "b7397fff831b47db0b6111663d816a64a389cb25", "dest": "/var/lib/kolla/config_files/sahara-engine.json", "gid": 0, "group": "root", "item": 
{"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "ac2c7a84fc46a1f1d128201ce5b67c2d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 360, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672883.48-75241712917675/source", "state": "file", "uid": 0} >2018-06-22 09:08:04,651 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": true, "checksum": "66d6d6bd51aaa0c100cdfc7688267a4342c7859f", "dest": "/var/lib/kolla/config_files/redis.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": 
"/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "md5sum": "ceafff1d742633f8759bdb1af0e3ebd4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 843, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672884.06-61943706255767/source", "state": "file", "uid": 0} >2018-06-22 09:08:05,250 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": true, "checksum": "b64555136537c36af22340fb15f21f0e01ac3495", "dest": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": 
"557a4e9522f54cfbd6456516e67f4971", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 271, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672884.66-221143258888635/source", "state": "file", "uid": 0} >2018-06-22 09:08:05,829 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": true, "checksum": "2a93405ac579e31c6e5732983f3d7dd8bed55b33", "dest": "/var/lib/kolla/config_files/glance_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "30c5fe40dffc304e7edeab4019e96e92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 556, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672885.26-219297722165897/source", "state": "file", 
"uid": 0} >2018-06-22 09:08:06,404 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": true, "checksum": "739f6562d3ea24561c6d8bcf37041a9eac928257", "dest": "/var/lib/kolla/config_files/swift_container_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b63816c7c08aef58249d13b65b387da6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672885.84-19435690969178/source", "state": "file", "uid": 0} >2018-06-22 09:08:06,987 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": true, "checksum": "98adef088b2ae2648ac88b812890957ec54eff13", "dest": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": 
"/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "md5sum": "4a38c9578181c292891f5f7bdb9f791b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 428, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672886.41-135516138220332/source", "state": "file", "uid": 0} >2018-06-22 09:08:07,566 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": true, "checksum": "ebbb7ee6895cea2b9278f33e888881d3d3f1a68a", "dest": "/var/lib/kolla/config_files/swift_object_expirer.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "e4bf891d8ffc9a015be201a6ef0d5abc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672887.0-19670375610097/source", "state": "file", "uid": 0} >2018-06-22 09:08:08,148 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling 
--polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": true, "checksum": "53d52f7d52f0fb3da33de2c20414eb3248593fdd", "dest": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "2863f917d7ada51e9570fb53bb363eed", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672887.57-183122189980485/source", "state": "file", "uid": 0} >2018-06-22 09:08:08,727 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672888.16-75897643033038/source", "state": "file", "uid": 0} >2018-06-22 09:08:09,311 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": true, "checksum": "44a8f1a58092190d553d3f589cab9ae566f8dc81", "dest": "/var/lib/kolla/config_files/swift_rsync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "886febadf691905adf0c129f3aa0197a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672888.73-64978223565829/source", "state": "file", "uid": 0} >2018-06-22 09:08:09,889 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": true, "checksum": "279b64a7d6914d2a03c86c703f53e3d71b1daef1", "dest": "/var/lib/kolla/config_files/swift_account_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": 
"b41d67c146c800142c5405fe5a0b332e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672889.32-240736741059183/source", "state": "file", "uid": 0} >2018-06-22 09:08:10,467 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": true, "checksum": "06055a69fec2bc513b4c86ceb654a5fc29bd0866", "dest": "/var/lib/kolla/config_files/cinder_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "801aba1299d99bfd7e63f66ca7a4ba40", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672889.9-118236195258704/source", "state": "file", "uid": 0} >2018-06-22 09:08:11,061 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": true, "checksum": "a0874b803c5238a4eeb12b1265d5d1db93c0d3d4", "dest": "/var/lib/kolla/config_files/swift_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": 
"/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a38e4e3ae519b3b0824e19184e521b36", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672890.48-139282097856156/source", "state": "file", "uid": 0} >2018-06-22 09:08:11,644 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": true, "checksum": "8dbfc3669a6d79fb30702be502ced7501500480a", "dest": "/var/lib/kolla/config_files/swift_container_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a697319d04392dc572dff6236144571f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672891.07-47669526449539/source", "state": "file", "uid": 0} >2018-06-22 09:08:12,223 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": true, "checksum": "3c87335a28b992f90769aea9ea62fb610f8236f1", "dest": "/var/lib/kolla/config_files/clustercheck.json", "gid": 0, "group": 
"root", "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d74434e7b8bcaca0b227152346c13db8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 165, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672891.65-184555537839291/source", "state": "file", "uid": 0} >2018-06-22 09:08:12,814 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": true, "checksum": "b52f0d28ed1ac134c64994c08b3f2378e8dff494", "dest": "/var/lib/kolla/config_files/mysql.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": 
"mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "md5sum": "4d15ed291dbe96e88b9a128b0e5c99e9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 687, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672892.23-216698926617779/source", "state": "file", "uid": 0} >2018-06-22 09:08:13,418 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_placement.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672892.82-136913683153943/source", "state": "file", "uid": 0} >2018-06-22 09:08:13,997 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': 
u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": true, "checksum": "fd070eb1bdc97442fddc24f503fe5e3251b89e28", "dest": "/var/lib/kolla/config_files/sahara-api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "bd52668d37c227cc00c418bbe889ab90", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 357, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672893.43-186902532430971/source", "state": "file", "uid": 0} >2018-06-22 09:08:14,601 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": true, "checksum": "f4177197cb07127689ae10a60020efa3a5e0d457", "dest": "/var/lib/kolla/config_files/aodh_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "582326e52a94260e71a4a19dc4d75191", "mode": "0600", "owner": 
"root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672894.01-98954179694226/source", "state": "file", "uid": 0} >2018-06-22 09:08:15,171 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": true, "checksum": "815ba71e0584cb12e7d40f794603c6bfb1800626", "dest": "/var/lib/kolla/config_files/keystone_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "md5sum": "b3b3bbd6499e09c424665311a5e66136", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 252, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672894.61-272757917264002/source", "state": "file", "uid": 0} >2018-06-22 09:08:15,754 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": 
"/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672895.18-89497837883748/source", "state": "file", "uid": 0} >2018-06-22 09:08:16,335 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": true, "checksum": "659d25615392d81b2f6bc001067232495de4d6ac", "dest": "/var/lib/kolla/config_files/swift_object_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "cdea8a372a87263d5fc44b482867a705", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 201, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672895.76-91172060007456/source", "state": "file", "uid": 0} >2018-06-22 09:08:16,921 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": true, "checksum": "01a54792c74d0ebd057e8d0f44e6e8e619283e62", "dest": "/var/lib/kolla/config_files/nova_conductor.json", "gid": 0, "group": "root", 
"item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "ccbba0ad7a926ceca2bf858b8a9cc376", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672896.34-219481795734885/source", "state": "file", "uid": 0} >2018-06-22 09:08:17,509 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api_cfn.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672896.93-49410834217429/source", "state": "file", "uid": 0} >2018-06-22 09:08:18,105 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 
'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": true, "checksum": "edb529183cc509ea82818edf4d88e3650b5ffc57", "dest": "/var/lib/kolla/config_files/nova_metadata.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "45129bd8b5b9aef067edb558a9fb2c68", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 249, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672897.52-43249536082587/source", "state": "file", "uid": 0} >2018-06-22 09:08:18,698 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672898.11-98529641182131/source", "state": "file", "uid": 0} >2018-06-22 09:08:19,296 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": true, "checksum": "205ddacf194881a04c54779e3049b3c59ef6c4af", "dest": "/var/lib/kolla/config_files/rabbitmq.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, 
"owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "md5sum": "1097dade2a2355fd51207668004d093d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 792, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672898.71-58808320858741/source", "state": "file", "uid": 0} >2018-06-22 09:08:19,865 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": true, "checksum": "a960878859377dfae6334d9b7eaa9f554ab31798", "dest": "/var/lib/kolla/config_files/nova_consoleauth.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "2a66fc646aae3e5913e0598ccef3881f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 248, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672899.3-111333432800198/source", "state": "file", "uid": 0} >2018-06-22 09:08:20,454 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": true, "checksum": "4f7a34f38afe301f885e25eb10225c461ab1d0b1", "dest": 
"/var/lib/kolla/config_files/swift_object_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "71a7e788486d505cfec645da0ac337cd", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672899.87-32274736227691/source", "state": "file", "uid": 0} >2018-06-22 09:08:21,038 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": true, "checksum": "5a73d3b7ef652341120c9298683d3a26f3fb668b", "dest": "/var/lib/kolla/config_files/neutron_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "c48346aa3f8c096826ebab378db9dfb9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 549, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672900.45-278455343330569/source", "state": "file", "uid": 0} >2018-06-22 09:08:21,620 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": true, "checksum": "9ec49193a63036ecf32a1479eabdac05dcab06e0", "dest": "/var/lib/kolla/config_files/cinder_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "93e9da0d08550be0ed30576cefdfbfbb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 340, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672901.05-114887057257242/source", "state": "file", "uid": 0} >2018-06-22 09:08:22,209 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": true, "checksum": "c8763a8c16702042afe553b54212340d800e1509", "dest": "/var/lib/kolla/config_files/gnocchi_metricd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "db9bd25aa2fcd2845d442869e986e7d8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 471, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672901.63-9914015591227/source", "state": "file", "uid": 0} >2018-06-22 09:08:22,813 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': 
u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": true, "checksum": "fe01b9d48d08f239bbf9acf7e2a1492397180c8e", "dest": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "a26f6acfc823d6e2e5b34367b859c8fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 617, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672902.22-241185068419646/source", "state": "file", "uid": 0} >2018-06-22 09:08:23,385 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": true, "checksum": "a418eddca731078cfd8fe2fda7ee64d9ffaf7dda", "dest": "/var/lib/kolla/config_files/swift_container_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "930bbe0f8c13b55f664fb3a89dfa1613", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 207, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672902.82-56871283387222/source", "state": "file", "uid": 0} >2018-06-22 09:08:23,974 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": true, "checksum": "fe3989178a2ea434bae6dfd64b04423e3ea005bc", "dest": "/var/lib/kolla/config_files/heat_engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "aee05ebc54399dde3dfc3577c3431a92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 322, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672903.39-106522253055875/source", "state": "file", "uid": 0} >2018-06-22 09:08:24,566 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 
'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672903.98-259636630247820/source", "state": "file", "uid": 0} >2018-06-22 09:08:25,161 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": true, "checksum": "460cdcfbcfac45a30b03df89ac84d2f34db64d72", "dest": "/var/lib/kolla/config_files/swift_object_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "md5sum": "b00c233fd2cd32c68e429e42918b8245", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672904.57-246246813679721/source", "state": "file", "uid": 0} >2018-06-22 09:08:25,729 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": true, "checksum": "80800f9f267aaf3497499af70b7945e3b6ae771b", "dest": "/var/lib/kolla/config_files/redis_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "c45d2764863cc585b994d432412ff9e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 172, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672905.17-6049292502842/source", "state": "file", "uid": 0} >2018-06-22 09:08:26,306 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": true, "checksum": "39f33531116fbcba7a5d9c1cbbc32f4af5e6b981", "dest": "/var/lib/kolla/config_files/gnocchi_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": 
"/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "5e924ffe736d942bf904a791bf5b5af2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 475, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672905.74-106777310885251/source", "state": "file", "uid": 0} >2018-06-22 09:08:26,873 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": true, "checksum": "7f36445e4c6eb403ce919ca3adee771d4cb3bcce", "dest": "/var/lib/kolla/config_files/cinder_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "bb3e2e5741eb3e5b6c53da835e66d00d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672906.31-159743735052060/source", "state": "file", "uid": 0} >2018-06-22 09:08:27,455 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 
'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": true, "checksum": "e800a0e1c86f8fa7a41efbf24ce38f48a458ba51", "dest": "/var/lib/kolla/config_files/cinder_volume.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "a85ec43ba623807ac022c04663fa68f5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 579, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672906.88-5527905881579/source", "state": "file", "uid": 0} >2018-06-22 09:08:28,035 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': 
u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": true, "checksum": "2db8f01174b9c2aa3a180add472b54891aed5cd6", "dest": "/var/lib/kolla/config_files/panko_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "md5sum": "7d9530934c938a4c96f71797957f7ca8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672907.46-195963354691334/source", "state": "file", "uid": 0} >2018-06-22 09:08:28,627 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": true, "checksum": "fbcdad9219733b81ad969426553906c1a8648897", "dest": "/var/lib/kolla/config_files/swift_object_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "45f7348541b64a76aec07477ea1d7358", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672908.04-245705913447609/source", "state": "file", "uid": 0} >2018-06-22 09:08:29,201 p=21516 u=mistral | changed: [controller-0] => (item={'value': 
{'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": true, "checksum": "cd233477dc9defd8028ac1a8fe736b8c9fcde9f8", "dest": "/var/lib/kolla/config_files/neutron_l3_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "b47a8dc2601f0e1c404b9009d1c99c32", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 634, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672908.63-14788685215428/source", "state": "file", "uid": 0} >2018-06-22 09:08:29,777 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": true, "checksum": "a7135286aba5eb111dc77c913fc1f7dc0977e783", "dest": "/var/lib/kolla/config_files/aodh_listener.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "ff2b7ae2bb8061a36a8223f5c34a970b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672909.21-26491541266629/source", "state": "file", "uid": 0} >2018-06-22 09:08:30,356 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": true, "checksum": "1f5cc060becbca7be3515f39537993b91e109a6d", "dest": "/var/lib/kolla/config_files/swift_container_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "59a9944c2c3c07fec0293d2efd7d8082", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672909.78-224509979687819/source", "state": "file", "uid": 0} >2018-06-22 09:08:30,939 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": true, "checksum": "596ee1b7f45471d04a0bc3d985f82ad722631b98", "dest": "/var/lib/kolla/config_files/aodh_evaluator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "94c5432632bf2acca69de0063414183b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 245, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672910.37-189681707514404/source", "state": "file", "uid": 0} >2018-06-22 09:08:31,527 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672910.95-47568236788760/source", "state": "file", "uid": 0} >2018-06-22 09:08:32,127 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672911.54-180703888390293/source", "state": "file", "uid": 0} >2018-06-22 09:08:32,715 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": true, "checksum": "1a38774f0fed561a8f1ad8c7f0a976a71a7f7008", "dest": 
"/var/lib/kolla/config_files/gnocchi_statsd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "b98425b2f26d4e30448a72685b1f89ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 470, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672912.14-267912456900200/source", "state": "file", "uid": 0} >2018-06-22 09:08:33,320 p=21516 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': '/var/lib/kolla/config_files/horizon.json'}) => {"changed": true, "checksum": "fc55910103403d0bb92e62e940dbd536aff43f84", "dest": "/var/lib/kolla/config_files/horizon.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "md5sum": "77504b6ea1f544f3c70dbc4115bfc354", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 587, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672912.72-132994593582782/source", "state": "file", "uid": 0} >2018-06-22 09:08:33,378 p=21516 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-22 09:08:33,388 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:08:33,413 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:08:33,435 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:08:33,459 p=21516 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-22 09:08:34,104 p=21516 u=mistral | changed: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": true, "checksum": "730e4e048205e1fadc6cd518326d4622d77edad6", "dest": 
"/var/lib/docker-puppet/docker-puppet-tasks3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "md5sum": "56e31c6a27d11dc618833f5679009c9d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672913.51-270204853764210/source", "state": "file", "uid": 0} >2018-06-22 09:08:34,128 p=21516 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-22 09:08:34,155 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:08:34,177 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:08:34,192 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:08:34,213 p=21516 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-22 09:08:34,889 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672914.31-83273421802473/source", "state": "file", "uid": 0} >2018-06-22 09:08:34,891 p=21516 u=mistral | changed: [controller-0] => {"changed": true, 
"checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672914.25-54553999855676/source", "state": "file", "uid": 0} >2018-06-22 09:08:34,893 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529672914.27-6618609173681/source", "state": "file", "uid": 0} >2018-06-22 09:08:34,920 p=21516 u=mistral | TASK [Run puppet host configuration for step 1] ******************************** >2018-06-22 09:08:49,861 p=21516 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:08:50,614 p=21516 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:09:59,504 p=21516 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:09:59,527 p=21516 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** >2018-06-22 09:09:59,654 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.73 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}8cd5ea7a71047b590f89d618413c6eb5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: 
Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", 
> "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > 
"Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val 
changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}a839b1ab3552f629efbcc7aaf42e7964'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Notice: 
/Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 74.31 seconds", > "Changes:", > " Total: 166", > "Events:", > " Success: 166", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 216", > " Restarted: 5", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " User: 0.05", > " File: 0.11", > " Sysctl: 0.15", > " Sysctl runtime: 0.21", > " Package: 0.39", > " Pcmk property: 1.01", > " Firewall: 14.51", > " Last run: 1529672998", > " Service: 2.58", > " Config retrieval: 3.22", > " Exec: 52.48", > " Filebucket: 0.00", > " Total: 74.74", > "Version:", > " Config: 1529672921", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:09:59,677 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.96 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}8cd5ea7a71047b590f89d618413c6eb5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to 
lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt 
ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 6.98 seconds", > "Changes:", > " Total: 98", > "Events:", > " Success: 98", > "Resources:", > " Total: 141", > " Restarted: 3", > " Out of sync: 98", > " Changed: 98", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.07", > " File: 0.16", > " Sysctl runtime: 0.21", > " Package: 0.24", > " Exec: 0.92", > " Service: 1.20", > " Last run: 1529672930", > " Config retrieval: 2.27", > " Firewall: 2.82", > " Total: 7.92", > " Filebucket: 0.00", > "Version:", > " Config: 1529672920", > 
" Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:09:59,689 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.87 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}c1d92fa159fef3afd721be5f86af886d'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/Exec[update_timezone]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}8cd5ea7a71047b590f89d618413c6eb5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '65536' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: 
/Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 6.97 seconds", > "Changes:", > " Total: 92", > "Events:", > " Success: 92", > "Resources:", > " Total: 135", > " Restarted: 3", > " Out of sync: 92", > " Changed: 92", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.11", > " File: 0.15", > " Sysctl runtime: 0.18", > " Package: 0.24", > " Service: 1.30", > " Firewall: 1.76", > " Last run: 1529672929", > " Exec: 2.00", > " Config retrieval: 2.16", > " Total: 7.92", > " Concat fragment: 0.00", > "Version:", > " Config: 1529672920", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:09:59,713 p=21516 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 1] ***************** >2018-06-22 09:10:20,001 p=21516 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:10:52,378 p=21516 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:12:32,214 p=21516 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:12:32,238 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] *** >2018-06-22 09:12:32,361 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-22 13:10:00,239 INFO: 27969 -- Running docker-puppet", > "2018-06-22 13:10:00,240 DEBUG: 27969 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-22 13:10:00,240 DEBUG: 
27969 -- config_volume crond", > "2018-06-22 13:10:00,240 DEBUG: 27969 -- puppet_tags ", > "2018-06-22 13:10:00,240 DEBUG: 27969 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 13:10:00,240 DEBUG: 27969 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:00,240 DEBUG: 27969 -- volumes []", > "2018-06-22 13:10:00,241 DEBUG: 27969 -- Adding new service", > "2018-06-22 13:10:00,241 INFO: 27969 -- Service compilation completed.", > "2018-06-22 13:10:00,241 DEBUG: 27969 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-22 13:10:00,241 INFO: 27969 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-06-22 13:10:00,254 INFO: 27970 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:00,254 DEBUG: 27970 -- config_volume crond", > "2018-06-22 13:10:00,254 DEBUG: 27970 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-22 13:10:00,255 DEBUG: 27970 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 13:10:00,255 DEBUG: 27970 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:00,255 DEBUG: 27970 -- volumes []", > "2018-06-22 13:10:00,256 INFO: 27970 -- Removing container: docker-puppet-crond", > "2018-06-22 13:10:00,335 INFO: 27970 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:13,071 DEBUG: 27970 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Waiting", > "121ab4741000: Download complete", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "", > "2018-06-22 13:10:13,074 DEBUG: 27970 -- NET_HOST enabled", > "2018-06-22 13:10:13,074 DEBUG: 27970 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=ceph-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpg6J4Q6:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:19,868 DEBUG: 27970 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 0.48 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.54", > " Total: 0.56", > " Last run: 1529673019", > "Version:", > " Config: 1529673018", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-22 13:10:13.328341331 +0000", > "2018-06-22 13:10:19,868 DEBUG: 27970 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=ceph-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > 
"Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:13.328341331 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ md5sum", > "tar: Removing leading `/' from member names", > "+ awk '{print $1}'", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-22 13:10:19,869 INFO: 27970 -- Removing container: docker-puppet-crond", > "2018-06-22 13:10:19,905 DEBUG: 27970 -- docker-puppet-crond", > "2018-06-22 13:10:19,905 INFO: 27970 -- Finished processing puppet configs 
for crond", > "2018-06-22 13:10:19,906 DEBUG: 27969 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-22 13:10:19,906 DEBUG: 27969 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-22 13:10:19,909 DEBUG: 27969 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 13:10:19,909 DEBUG: 27969 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 13:10:19,909 DEBUG: 27969 -- Updating config hash for logrotate_crond, config_volume=crond hash=dc3734c3dca39c4392038e41c43f7286" > ] >} >2018-06-22 09:12:32,411 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-22 13:10:00,225 INFO: 32006 -- Running docker-puppet", > "2018-06-22 13:10:00,225 DEBUG: 32006 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-22 13:10:00,226 DEBUG: 32006 -- config_volume ceilometer", > "2018-06-22 13:10:00,226 DEBUG: 32006 -- puppet_tags ceilometer_config", > "2018-06-22 13:10:00,226 DEBUG: 32006 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "", > "2018-06-22 13:10:00,226 DEBUG: 32006 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:10:00,226 DEBUG: 32006 -- volumes []", > "2018-06-22 13:10:00,226 DEBUG: 32006 -- Adding new service", > "2018-06-22 13:10:00,226 DEBUG: 32006 -- config_volume neutron", > "2018-06-22 13:10:00,226 DEBUG: 32006 -- puppet_tags neutron_plugin_ml2", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:10:00,227 
DEBUG: 32006 -- volumes []", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- Adding new service", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- config_volume neutron", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- config_volume iscsid", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- puppet_tags iscsid_config", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- config_volume nova_libvirt", > "2018-06-22 13:10:00,227 DEBUG: 32006 -- puppet_tags nova_config,nova_paste_api_ini", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "# We'll probably treat it like we do with Neutron plugins.", > "# Until then, just include it in the default nova-compute role.", > "include tripleo::profile::base::nova::compute::libvirt", > "include ::tripleo::profile::base::database::mysql::client", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- volumes []", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- Adding new service", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- config_volume nova_libvirt", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- puppet_tags libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-06-22 13:10:00,228 
DEBUG: 32006 -- manifest include tripleo::profile::base::nova::libvirt", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- puppet_tags ", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- manifest include ::tripleo::profile::base::sshd", > "include tripleo::profile::base::nova::migration::target", > "2018-06-22 13:10:00,228 DEBUG: 32006 -- config_volume crond", > "2018-06-22 13:10:00,229 DEBUG: 32006 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 13:10:00,229 DEBUG: 32006 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:00,229 DEBUG: 32006 -- volumes []", > "2018-06-22 13:10:00,229 DEBUG: 32006 -- Adding new service", > "2018-06-22 13:10:00,229 INFO: 32006 -- Service compilation completed.", > "2018-06-22 13:10:00,230 DEBUG: 32006 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', []]", > "2018-06-22 13:10:00,230 DEBUG: 32006 -- - [u'nova_libvirt', u'file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password', u\"# TODO(emilien): figure how to deal with libvirt profile.\\n# We'll probably treat it like we do with Neutron plugins.\\n# Until then, just include it in the default nova-compute role.\\ninclude tripleo::profile::base::nova::compute::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sshd\\ninclude tripleo::profile::base::nova::migration::target\", u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', []]", > "2018-06-22 13:10:00,230 DEBUG: 32006 -- - [u'crond', 'file,file_line,concat,augeas,cron', 
u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-22 13:10:00,230 DEBUG: 32006 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-06-22 13:10:00,230 DEBUG: 32006 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', [u'/etc/iscsi:/etc/iscsi']]", > "2018-06-22 13:10:00,230 INFO: 32006 -- Starting multiprocess configuration steps. Using 3 processes.", > "2018-06-22 13:10:00,242 INFO: 32007 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:10:00,242 INFO: 32008 -- Starting configuration of nova_libvirt using image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 13:10:00,243 DEBUG: 32007 -- config_volume ceilometer", > "2018-06-22 13:10:00,243 DEBUG: 32008 -- config_volume nova_libvirt", > "2018-06-22 13:10:00,243 DEBUG: 32007 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config", > "2018-06-22 13:10:00,243 DEBUG: 32008 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-06-22 13:10:00,243 DEBUG: 32008 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "include tripleo::profile::base::nova::libvirt", > "include ::tripleo::profile::base::sshd", > "2018-06-22 13:10:00,243 DEBUG: 32007 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-06-22 13:10:00,243 DEBUG: 
32007 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:10:00,243 DEBUG: 32008 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 13:10:00,243 DEBUG: 32007 -- volumes []", > "2018-06-22 13:10:00,243 DEBUG: 32008 -- volumes []", > "2018-06-22 13:10:00,243 INFO: 32009 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:00,244 DEBUG: 32009 -- config_volume crond", > "2018-06-22 13:10:00,244 DEBUG: 32009 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-22 13:10:00,244 DEBUG: 32009 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 13:10:00,244 DEBUG: 32009 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:00,244 DEBUG: 32009 -- volumes []", > "2018-06-22 13:10:00,245 INFO: 32008 -- Removing container: docker-puppet-nova_libvirt", > "2018-06-22 13:10:00,245 INFO: 32007 -- Removing container: docker-puppet-ceilometer", > "2018-06-22 13:10:00,245 INFO: 32009 -- Removing container: docker-puppet-crond", > "2018-06-22 13:10:00,343 INFO: 32008 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 13:10:00,344 INFO: 32009 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:00,350 INFO: 32007 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:10:12,907 DEBUG: 32009 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Waiting", > "121ab4741000: Verifying Checksum", > "121ab4741000: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:12,911 DEBUG: 32009 -- NET_HOST enabled", > "2018-06-22 13:10:12,911 DEBUG: 32009 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpDVyWTo:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:20,167 DEBUG: 32007 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "c66228eb2ac7: Pulling fs layer", > "333aa6b2b383: Pulling fs layer", > "1eb9ef5adcb4: Pulling fs layer", > "c66228eb2ac7: Waiting", > "333aa6b2b383: Waiting", > "1eb9ef5adcb4: Waiting", > "c66228eb2ac7: Verifying Checksum", > "c66228eb2ac7: Download complete", > "333aa6b2b383: Verifying Checksum", > "333aa6b2b383: Download complete", > "1eb9ef5adcb4: Verifying Checksum", > "1eb9ef5adcb4: Download complete", > "c66228eb2ac7: Pull complete", > "333aa6b2b383: Pull complete", > "1eb9ef5adcb4: Pull complete", > "Digest: sha256:3f638e03aaf1d7e303183e06ff1627a5a0efeaef228a7be1e9667ae62d7d6a1b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:10:20,174 DEBUG: 32007 -- NET_HOST enabled", > "2018-06-22 13:10:20,174 DEBUG: 32007 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config --env NAME=ceilometer --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpdD8vOW:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:10:21,477 DEBUG: 32009 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.50 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.05 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " Cron: 0.01", > " File: 0.01", > " Config retrieval: 0.61", > " Total: 0.62", > " Last run: 1529673020", > "Version:", > " Config: 1529673019", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-22 13:10:13.256433335 +0000", > "2018-06-22 13:10:21,478 DEBUG: 32009 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=compute-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:13.256433335 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-22 13:10:21,479 
INFO: 32009 -- Removing container: docker-puppet-crond", > "2018-06-22 13:10:21,521 DEBUG: 32009 -- docker-puppet-crond", > "2018-06-22 13:10:21,522 INFO: 32009 -- Finished processing puppet configs for crond", > "2018-06-22 13:10:21,523 INFO: 32009 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:10:21,523 DEBUG: 32009 -- config_volume neutron", > "2018-06-22 13:10:21,523 DEBUG: 32009 -- puppet_tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-22 13:10:21,523 DEBUG: 32009 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "include ::tripleo::profile::base::neutron::ovs", > "2018-06-22 13:10:21,523 DEBUG: 32009 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:10:21,523 DEBUG: 32009 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-22 13:10:21,524 INFO: 32009 -- Removing container: docker-puppet-neutron", > "2018-06-22 13:10:21,650 INFO: 32009 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:10:26,972 DEBUG: 32009 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "ea1d509b6f44: Pulling fs layer", > "e9f9993bb931: Pulling fs layer", > "e9f9993bb931: Verifying Checksum", > "e9f9993bb931: Download complete", > "ea1d509b6f44: Verifying Checksum", > "ea1d509b6f44: Download complete", > "ea1d509b6f44: Pull complete", > "e9f9993bb931: Pull complete", > "Digest: sha256:af12594500608f07f8d38590e2c9b2983e5d81ae8b63aec042f36411b0e76adc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:10:26,975 DEBUG: 32009 -- NET_HOST enabled", > "2018-06-22 13:10:26,976 DEBUG: 32009 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp2qNoRZ:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:10:30,015 DEBUG: 32007 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.04 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/metering_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 1.67 seconds", > " Total: 29", > " Success: 29", > " Total: 141", > " Skipped: 22", > " Out of sync: 
29", > " Changed: 29", > " Config retrieval: 1.23", > " Ceilometer config: 1.55", > " Last run: 1529673028", > " Total: 2.78", > " Resources: 0.00", > " Config: 1529673026", > "Gathering files modified after 2018-06-22 13:10:20.511475861 +0000", > "2018-06-22 13:10:30,015 DEBUG: 32007 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:20.511475861 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-06-22 13:10:30,016 INFO: 32007 -- Removing container: docker-puppet-ceilometer", > "2018-06-22 13:10:30,086 DEBUG: 32007 -- docker-puppet-ceilometer", > "2018-06-22 13:10:30,086 INFO: 32007 -- Finished processing puppet configs for ceilometer", > "2018-06-22 13:10:30,087 INFO: 32007 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:10:30,087 DEBUG: 32007 -- config_volume iscsid", > "2018-06-22 13:10:30,087 DEBUG: 32007 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-06-22 13:10:30,087 DEBUG: 32007 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-22 13:10:30,087 DEBUG: 32007 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:10:30,087 DEBUG: 32007 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-22 13:10:30,087 INFO: 32007 -- Removing container: docker-puppet-iscsid", > "2018-06-22 13:10:30,172 INFO: 32007 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:10:30,860 DEBUG: 32007 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "ab4eae34093d: Pulling fs layer", > "ab4eae34093d: Verifying Checksum", > "ab4eae34093d: Download complete", > "ab4eae34093d: Pull complete", > "Digest: sha256:a46aa93fee87b0f173118da5c2a18dc271772adb839a481ec07f2a53534ac53c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:10:30,863 DEBUG: 32007 -- NET_HOST enabled", > "2018-06-22 13:10:30,863 DEBUG: 32007 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpMxi3jd:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:10:35,138 DEBUG: 32008 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-compute ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-compute", > "0e3031608420: Pulling fs layer", > "9c13697fe587: Pulling fs layer", > "0e3031608420: Waiting", > "9c13697fe587: Waiting", > "0e3031608420: Verifying Checksum", > "0e3031608420: Download complete", > "9c13697fe587: Verifying Checksum", > "9c13697fe587: Download complete", > "0e3031608420: Pull complete", > "9c13697fe587: Pull complete", > "Digest: sha256:c6b75506ba5602b470f8dbfdcc57e0bcd20fc363d265aa234469343e439fa65a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 13:10:35,141 DEBUG: 32008 -- NET_HOST enabled", > "2018-06-22 13:10:35,141 DEBUG: 32008 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_libvirt --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpa1ShqB:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", > "2018-06-22 13:10:37,568 DEBUG: 32007 -- Notice: hiera(): Cannot load backend 
module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.55 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > "Notice: Applied catalog in 0.08 seconds", > " Total: 10", > " Skipped: 8", > " File: 0.00", > " Exec: 0.02", > " Total: 0.63", > " Last run: 1529673036", > " Config: 1529673036", > "Gathering files modified after 2018-06-22 13:10:31.109535881 +0000", > "2018-06-22 13:10:37,568 DEBUG: 32007 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:31.109535881 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-06-22 13:10:37,568 INFO: 32007 -- Removing 
container: docker-puppet-iscsid", > "2018-06-22 13:10:37,608 DEBUG: 32007 -- docker-puppet-iscsid", > "2018-06-22 13:10:37,608 INFO: 32007 -- Finished processing puppet configs for iscsid", > "2018-06-22 13:10:39,318 DEBUG: 32009 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.60 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 0.81 seconds", > " Total: 48", > " Success: 48", > " Total: 174", > " Skipped: 27", > " Out of sync: 48", > " Changed: 48", > " Neutron plugin ml2: 0.03", > " Neutron agent ovs: 0.06", > " Neutron config: 0.51", > " Last run: 1529673038", > " Config retrieval: 2.89", > " Total: 3.49", > " Config: 1529673034", > "Gathering files modified after 2018-06-22 13:10:27.264514351 +0000", > "2018-06-22 13:10:39,319 DEBUG: 32009 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Scope(Class[Neutron]): neutron::rabbit_host, neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 530]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ml2.pp\", 45]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 132]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync_srcs+=' /var/www'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:27.264514351 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-06-22 13:10:39,319 INFO: 32009 -- Removing container: docker-puppet-neutron", > "2018-06-22 13:10:39,357 DEBUG: 32009 -- docker-puppet-neutron", > "2018-06-22 13:10:39,357 INFO: 32009 -- Finished processing puppet configs for neutron", > "2018-06-22 13:10:52,261 DEBUG: 32008 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.88 seconds", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{md5}056b96e7e8124e1bc55f77cba4e68ce7' to '{md5}a5a5f8a3e1fda6c42681ae00f4ddf02d'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{md5}09c4fa846e8e27bfa3ab3325900d63ea' to '{md5}2f138c0278e1b666ec77a6d8ba3054a1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{md5}dff145cb4e519333c0096aae8de2e77c' to '{md5}0a97037bb44fd64d20c1ae93194fa091'", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/vncserver_proxyclient_address]/ensure: created", > 
"Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/keymap]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[glance/verify_glance_signatures]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tls]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tcp]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{md5}cfce3c4aa78e4e5b779d7deebcbeb575'", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/vncserver_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_group]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_ro]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_rw]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_ro_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_rw_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: 
created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}8f163e7f432aae0a353d7c09f9c0b750'", > "Notice: Applied catalog in 7.60 seconds", > " Total: 103", > " Success: 103", > " Changed: 103", > " Out of sync: 103", > " Total: 313", > " Skipped: 47", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File line: 0.00", > " Exec: 0.01", > " Libvirtd config: 0.02", > " File: 0.03", > " Package: 0.08", > " Augeas: 0.60", > " Total: 10.52", > " Last run: 1529673050", > " Config retrieval: 3.22", > " Nova config: 6.56", > " Config: 1529673040", > "Gathering files modified after 2018-06-22 13:10:35.336559060 +0000", > "2018-06-22 13:10:52,261 DEBUG: 32008 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password'", > "+ origin_of_time=/var/lib/config-data/nova_libvirt.origin_of_time", > "+ touch 
/var/lib/config-data/nova_libvirt.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > " with Stdlib::Compat::Bool. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Unknown variable: '::nova::vncproxy::host'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:31:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_protocol'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:36:5", > "Warning: Unknown variable: '::nova::vncproxy::port'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:41:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_path'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:46:5", > "Warning: Unknown variable: '::nova::compute::pci_passthrough'. at /etc/puppet/modules/nova/manifests/compute/pci.pp:19:38", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/compute/libvirt.pp\", 278]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute/libvirt.pp\", 33]", > " with Stdlib::Compat::Ip_Address. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: Exec[set libvirt sasl credentials](provider=posix): Cannot understand environment setting \"TLS_PASSWORD=\"", > "+ rsync_srcs+=' /var/lib/nova/.ssh'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/nova/.ssh /var/lib/config-data/nova_libvirt", > "++ stat -c %y /var/lib/config-data/nova_libvirt.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:35.336559060 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_libvirt", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_libvirt", > "++ find /etc /root /opt /var/spool/cron /var/lib/nova/.ssh -newer /var/lib/config-data/nova_libvirt.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_libvirt --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_libvirt --mtime=1970-01-01", > "2018-06-22 13:10:52,261 INFO: 32008 -- Removing container: docker-puppet-nova_libvirt", > "2018-06-22 13:10:52,301 DEBUG: 32008 -- docker-puppet-nova_libvirt", > "2018-06-22 13:10:52,302 INFO: 32008 -- Finished processing puppet configs for nova_libvirt", > "2018-06-22 13:10:52,302 DEBUG: 32006 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-22 13:10:52,302 DEBUG: 32006 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-22 13:10:52,305 DEBUG: 32006 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:10:52,305 DEBUG: 32006 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:10:52,305 DEBUG: 32006 -- Updating config hash for neutron_ovs_bridge, config_volume=iscsid 
hash=36fbc1cede03a4eca918dcd53b1c5f14", > "2018-06-22 13:10:52,305 DEBUG: 32006 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 13:10:52,305 DEBUG: 32006 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 13:10:52,306 DEBUG: 32006 -- Updating config hash for nova_libvirt, config_volume=iscsid hash=c8edc90eec58b7a027b54ddf838ba046", > "2018-06-22 13:10:52,306 DEBUG: 32006 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 13:10:52,306 DEBUG: 32006 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 13:10:52,306 DEBUG: 32006 -- Updating config hash for nova_virtlogd, config_volume=iscsid hash=c8edc90eec58b7a027b54ddf838ba046", > "2018-06-22 13:10:52,307 DEBUG: 32006 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 13:10:52,307 DEBUG: 32006 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 13:10:52,307 DEBUG: 32006 -- Updating config hash for ceilometer_agent_compute, config_volume=iscsid hash=6bdd86c68de76bf63e1ff30bd16e16c8", > "2018-06-22 13:10:52,307 DEBUG: 32006 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt/etc", > "2018-06-22 13:10:52,307 DEBUG: 32006 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > 
"2018-06-22 13:10:52,307 DEBUG: 32006 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:10:52,308 DEBUG: 32006 -- Updating config hash for neutron_ovs_agent, config_volume=iscsid hash=36fbc1cede03a4eca918dcd53b1c5f14", > "2018-06-22 13:10:52,308 DEBUG: 32006 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 13:10:52,308 DEBUG: 32006 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-06-22 13:10:52,308 DEBUG: 32006 -- Updating config hash for nova_migration_target, config_volume=iscsid hash=c8edc90eec58b7a027b54ddf838ba046", > "2018-06-22 13:10:52,308 DEBUG: 32006 -- Updating config hash for nova_compute, config_volume=iscsid hash=c8edc90eec58b7a027b54ddf838ba046", > "2018-06-22 13:10:52,308 DEBUG: 32006 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 13:10:52,308 DEBUG: 32006 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 13:10:52,308 DEBUG: 32006 -- Updating config hash for logrotate_crond, config_volume=iscsid hash=2084d7579818ba4b3b36b22728bf408a" > ] >} >2018-06-22 09:12:33,320 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-22 13:10:00,187 INFO: 33378 -- Running docker-puppet", > "2018-06-22 13:10:00,187 DEBUG: 33378 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-06-22 13:10:00,188 DEBUG: 33378 -- config_volume aodh", > "2018-06-22 13:10:00,188 DEBUG: 33378 -- puppet_tags aodh_api_paste_ini,aodh_config", > 
"2018-06-22 13:10:00,188 DEBUG: 33378 -- manifest include tripleo::profile::base::aodh::api", > "", > "include ::tripleo::profile::base::database::mysql::client", > "2018-06-22 13:10:00,188 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 13:10:00,188 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,188 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,188 DEBUG: 33378 -- puppet_tags aodh_config", > "2018-06-22 13:10:00,188 DEBUG: 33378 -- manifest include tripleo::profile::base::aodh::evaluator", > "2018-06-22 13:10:00,188 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- puppet_tags aodh_config", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- manifest include tripleo::profile::base::aodh::listener", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- config_volume aodh", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- manifest include tripleo::profile::base::aodh::notifier", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- config_volume ceilometer", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- puppet_tags ceilometer_config", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:10:00,189 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- manifest include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:10:00,190 
DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- config_volume cinder", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- puppet_tags cinder_config,file,concat,file_line", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- manifest include ::tripleo::profile::base::cinder::api", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- manifest include ::tripleo::profile::base::cinder::backup::ceph", > "2018-06-22 13:10:00,190 DEBUG: 33378 -- manifest include ::tripleo::profile::base::cinder::scheduler", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- config_volume cinder", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- puppet_tags cinder_config,file,concat,file_line", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- manifest include ::tripleo::profile::base::lvm", > "include ::tripleo::profile::base::cinder::volume", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- config_volume clustercheck", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- puppet_tags file", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- config_volume glance_api", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- puppet_tags glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-06-22 
13:10:00,191 DEBUG: 33378 -- manifest include ::tripleo::profile::base::glance::api", > "2018-06-22 13:10:00,191 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- config_volume gnocchi", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- puppet_tags gnocchi_api_paste_ini,gnocchi_config", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- manifest include ::tripleo::profile::base::gnocchi::api", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- puppet_tags gnocchi_config", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- manifest include ::tripleo::profile::base::gnocchi::metricd", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,192 DEBUG: 33378 -- manifest include ::tripleo::profile::base::gnocchi::statsd", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- config_volume haproxy", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- puppet_tags haproxy_config", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}", > "['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::pacemaker::haproxy_bundle", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > 
"2018-06-22 13:10:00,193 DEBUG: 33378 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- config_volume heat_api", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- puppet_tags heat_config,file,concat,file_line", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- manifest include ::tripleo::profile::base::heat::api", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- config_volume heat_api_cfn", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-06-22 13:10:00,193 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- config_volume heat", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- puppet_tags heat_config,file,concat,file_line", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- config_volume horizon", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- puppet_tags horizon_config", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- manifest include ::tripleo::profile::base::horizon", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- config_volume 
iscsid", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- puppet_tags iscsid_config", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- manifest include ::tripleo::profile::base::iscsid", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:10:00,194 DEBUG: 33378 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- config_volume keystone", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- puppet_tags keystone_config,keystone_domain_config", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::keystone", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- config_volume memcached", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- puppet_tags file", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- manifest include ::tripleo::profile::base::memcached", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- config_volume mysql", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "exec {'wait-for-settle': command => '/bin/true' }", > "include ::tripleo::profile::pacemaker::database::mysql_bundle", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:00,195 DEBUG: 33378 -- config_volume neutron", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- puppet_tags 
neutron_config,neutron_api_config", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- manifest include tripleo::profile::base::neutron::server", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- config_volume neutron", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- puppet_tags neutron_plugin_ml2", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- puppet_tags neutron_config,neutron_dhcp_agent_config", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- manifest include tripleo::profile::base::neutron::dhcp", > "2018-06-22 13:10:00,196 DEBUG: 33378 -- puppet_tags neutron_config,neutron_l3_agent_config", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- manifest include tripleo::profile::base::neutron::l3", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- config_volume neutron", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- puppet_tags neutron_config,neutron_metadata_agent_config", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- manifest include tripleo::profile::base::neutron::metadata", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- volumes [u'/lib/modules:/lib/modules:ro', 
u'/run/openvswitch:/run/openvswitch']", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- config_volume nova", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- puppet_tags nova_config", > "2018-06-22 13:10:00,197 DEBUG: 33378 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::api", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- config_volume nova", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- puppet_tags nova_config", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- manifest include tripleo::profile::base::nova::conductor", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- manifest include tripleo::profile::base::nova::consoleauth", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- config_volume nova_placement", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- manifest include tripleo::profile::base::nova::placement", > "2018-06-22 13:10:00,198 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- config_volume nova", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- puppet_tags nova_config", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- manifest include tripleo::profile::base::nova::scheduler", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- manifest include tripleo::profile::base::nova::vncproxy", > "2018-06-22 
13:10:00,199 DEBUG: 33378 -- config_volume crond", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- puppet_tags ", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- config_volume panko", > "2018-06-22 13:10:00,199 DEBUG: 33378 -- puppet_tags panko_api_paste_ini,panko_config", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- manifest include tripleo::profile::base::panko::api", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- config_volume rabbitmq", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- puppet_tags file", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::rabbitmq", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- config_volume redis", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- puppet_tags exec", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- config_volume sahara", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- puppet_tags sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- manifest include ::tripleo::profile::base::sahara::api", > "2018-06-22 13:10:00,200 DEBUG: 33378 -- config_image 
192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- config_volume sahara", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- puppet_tags sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- manifest include ::tripleo::profile::base::sahara::engine", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- config_volume swift", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- puppet_tags swift_config,swift_proxy_config,swift_keymaster_config", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- manifest include ::tripleo::profile::base::swift::proxy", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- config_volume swift_ringbuilder", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- puppet_tags exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-06-22 13:10:00,201 DEBUG: 33378 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-06-22 13:10:00,202 DEBUG: 33378 -- volumes []", > "2018-06-22 13:10:00,202 DEBUG: 33378 -- Adding new service", > "2018-06-22 13:10:00,202 DEBUG: 33378 -- config_volume swift", > "2018-06-22 13:10:00,202 DEBUG: 33378 -- puppet_tags swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-06-22 
13:10:00,202 DEBUG: 33378 -- manifest include ::tripleo::profile::base::swift::storage", > "class xinetd() {}", > "2018-06-22 13:10:00,202 DEBUG: 33378 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:10:00,202 DEBUG: 33378 -- Existing service, appending puppet tags and manifest", > "2018-06-22 13:10:00,202 INFO: 33378 -- Service compilation completed.", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'nova_placement', u'file,file_line,concat,augeas,cron,nova_config', u'include tripleo::profile::base::nova::placement\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'aodh', u'file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config', u'include tripleo::profile::base::aodh::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::evaluator\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::listener\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::notifier\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'heat_api', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'swift_ringbuilder', u'file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', u'include 
::tripleo::profile::base::swift::ringbuilder', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'sahara', u'file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template', u'include ::tripleo::profile::base::sahara::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sahara::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'mysql', u'file,file_line,concat,augeas,cron,file', u\"['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\\nexec {'wait-for-settle': command => '/bin/true' }\\ninclude ::tripleo::profile::pacemaker::database::mysql_bundle\", u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'gnocchi', u'file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config', u'include ::tripleo::profile::base::gnocchi::api\\n\\ninclude ::tripleo::profile::base::gnocchi::metricd\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::gnocchi::statsd\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'clustercheck', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::pacemaker::clustercheck', u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'redis', u'file,file_line,concat,augeas,cron,exec', u'include 
::tripleo::profile::pacemaker::database::redis_bundle', u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'nova', u'file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config', u\"['Nova_cell_v2'].each |String $val| { noop_resource($val) }\\ninclude tripleo::profile::base::nova::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::conductor\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::consoleauth\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::vncproxy\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', [u'/etc/iscsi:/etc/iscsi']]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'glance_api', u'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config', u'include ::tripleo::profile::base::glance::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'keystone', u'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', u\"['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::keystone\\n\\ninclude 
::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', []]", > "2018-06-22 13:10:00,203 DEBUG: 33378 -- - [u'memcached', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::base::memcached\\n', u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'panko', u'file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config', u'include tripleo::profile::base::panko::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'heat', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'cinder', u'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', u'include ::tripleo::profile::base::cinder::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::backup::ceph\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::lvm\\ninclude ::tripleo::profile::base::cinder::volume\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'swift', 
u'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', u'include ::tripleo::profile::base::swift::proxy\\n\\ninclude ::tripleo::profile::base::swift::storage\\n\\nclass xinetd() {}', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'haproxy', u'file,file_line,concat,augeas,cron,haproxy_config', u\"exec {'wait-for-settle': command => '/bin/true' }\\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::pacemaker::haproxy_bundle\", u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n\\ninclude ::tripleo::profile::base::ceilometer::agent::notification\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- 
- [u'rabbitmq', u'file,file_line,concat,augeas,cron,file', u\"['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::rabbitmq\\n\", u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include tripleo::profile::base::neutron::server\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude tripleo::profile::base::neutron::dhcp\\n\\ninclude tripleo::profile::base::neutron::l3\\n\\ninclude tripleo::profile::base::neutron::metadata\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'horizon', u'file,file_line,concat,augeas,cron,horizon_config', u'include ::tripleo::profile::base::horizon\\n', u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 DEBUG: 33378 -- - [u'heat_api_cfn', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api_cfn\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', []]", > "2018-06-22 13:10:00,204 INFO: 33378 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-06-22 13:10:00,218 INFO: 33379 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 13:10:00,219 DEBUG: 33379 -- config_volume nova_placement", > "2018-06-22 13:10:00,218 INFO: 33380 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:10:00,219 DEBUG: 33379 -- puppet_tags file,file_line,concat,augeas,cron,nova_config", > "2018-06-22 13:10:00,219 DEBUG: 33380 -- config_volume swift_ringbuilder", > "2018-06-22 13:10:00,219 DEBUG: 33379 -- manifest include tripleo::profile::base::nova::placement", > "2018-06-22 13:10:00,219 DEBUG: 33380 -- puppet_tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-06-22 13:10:00,219 DEBUG: 33379 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 13:10:00,219 DEBUG: 33380 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-06-22 13:10:00,219 DEBUG: 33379 -- volumes []", > "2018-06-22 13:10:00,219 DEBUG: 33380 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:10:00,219 DEBUG: 33380 -- volumes []", > "2018-06-22 13:10:00,219 INFO: 33381 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 13:10:00,220 DEBUG: 33381 -- config_volume gnocchi", > "2018-06-22 13:10:00,220 DEBUG: 33381 -- puppet_tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config", > "2018-06-22 13:10:00,220 DEBUG: 33381 -- manifest include ::tripleo::profile::base::gnocchi::api", > "include 
::tripleo::profile::base::gnocchi::metricd", > "include ::tripleo::profile::base::gnocchi::statsd", > "2018-06-22 13:10:00,220 DEBUG: 33381 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 13:10:00,220 DEBUG: 33381 -- volumes []", > "2018-06-22 13:10:00,222 INFO: 33379 -- Removing container: docker-puppet-nova_placement", > "2018-06-22 13:10:00,222 INFO: 33380 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-06-22 13:10:00,222 INFO: 33381 -- Removing container: docker-puppet-gnocchi", > "2018-06-22 13:10:00,307 INFO: 33380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:10:00,308 INFO: 33379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 13:10:00,309 INFO: 33381 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 13:10:19,477 DEBUG: 33380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server", > "e0f71f706c2a: Pulling fs layer", > "121ab4741000: Pulling fs layer", > "a8ff0031dfcb: Pulling fs layer", > "c66228eb2ac7: Pulling fs layer", > "a98c7da29d65: Pulling fs layer", > "c4603b657b73: Pulling fs layer", > "a98c7da29d65: Waiting", > "c4603b657b73: Waiting", > "c66228eb2ac7: Waiting", > "121ab4741000: Verifying Checksum", > "121ab4741000: Download complete", > "c66228eb2ac7: Verifying Checksum", > "c66228eb2ac7: Download complete", > "a8ff0031dfcb: Verifying Checksum", > "a8ff0031dfcb: Download complete", > "a98c7da29d65: Verifying Checksum", > "a98c7da29d65: Download complete", > "e0f71f706c2a: Verifying Checksum", > "e0f71f706c2a: Download complete", > "c4603b657b73: Verifying Checksum", > "c4603b657b73: Download complete", > "e0f71f706c2a: Pull complete", > "121ab4741000: Pull complete", > "a8ff0031dfcb: Pull complete", > "c66228eb2ac7: Pull complete", > "a98c7da29d65: Pull complete", > "c4603b657b73: Pull complete", > "Digest: sha256:632f29598f1ea7b96a5573d0b5a942b3a1f571783804cdc07dac0910e97d1a87", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:10:19,480 DEBUG: 33380 -- NET_HOST enabled", > "2018-06-22 13:10:19,480 DEBUG: 33380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift_ringbuilder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpw7f6pI:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:10:24,008 DEBUG: 33379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-placement-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-placement-api", > "0e3031608420: Pulling fs layer", > "dd9c4679b681: Pulling fs layer", > "0e3031608420: Waiting", > "dd9c4679b681: Waiting", > "dd9c4679b681: Verifying Checksum", > "dd9c4679b681: Download complete", > "0e3031608420: Verifying Checksum", > "0e3031608420: Download complete", > "0e3031608420: Pull complete", > "dd9c4679b681: Pull complete", > "Digest: sha256:2336d644bd74c35fe7e050376f6d7a1b718ae6faf3556cf63917aceecdf581b6", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 13:10:24,013 DEBUG: 33379 -- NET_HOST enabled", > "2018-06-22 13:10:24,013 DEBUG: 33379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_placement --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config --env NAME=nova_placement --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGsYCSb:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", > "2018-06-22 13:10:26,694 DEBUG: 33381 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-api", > "64612d8109ce: Pulling fs layer", > "2d8b51759f9c: Pulling fs layer", > "64612d8109ce: Waiting", > "2d8b51759f9c: Waiting", > "2d8b51759f9c: Verifying Checksum", > "2d8b51759f9c: Download complete", > "64612d8109ce: Verifying Checksum", > "64612d8109ce: Download complete", > "64612d8109ce: Pull complete", > "2d8b51759f9c: Pull complete", > "Digest: sha256:0824e3fa2c22ac0acb43883a29cce2fbdf54a9cce722e559cc5c6325e46c2142", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 13:10:26,697 DEBUG: 33381 -- NET_HOST enabled", > "2018-06-22 13:10:26,698 DEBUG: 33381 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-gnocchi --env PUPPET_TAGS=file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config --env NAME=gnocchi --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpnF6X7m:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z 
--volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", > "2018-06-22 13:10:34,442 DEBUG: 33380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.09 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[fetch_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'", > "Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.17:%PORT%/d1]/Ring_object_device[172.17.4.17:6000/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.17:%PORT%/d1]/Ring_container_device[172.17.4.17:6001/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.17:%PORT%/d1]/Ring_account_device[172.17.4.17:6002/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[upload_swift_ring_tarball]: Triggered 'refresh' from 2 events", > "Notice: Applied catalog in 4.62 seconds", > "Changes:", > " Total: 11", > "Events:", > " Success: 11", > "Resources:", > " Changed: 11", > " Out of sync: 11", > " Skipped: 19", > " Total: 
36", > " Restarted: 6", > "Time:", > " File: 0.01", > " Ring object device: 0.54", > " Ring account device: 0.60", > " Ring container device: 0.60", > " Config retrieval: 1.26", > " Exec: 1.33", > " Last run: 1529673033", > " Total: 4.32", > "Version:", > " Config: 1529673027", > " Puppet: 4.8.2", > "Gathering files modified after 2018-06-22 13:10:19.771406277 +0000", > "2018-06-22 13:10:34,443 DEBUG: 33380 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball'", > "+ origin_of_time=/var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ touch /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball /etc/config.pp", > "Failed to get D-Bus connection: Operation not 
permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/ringbuilder.pp\", 113]:[\"/etc/config.pp\", 2]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/ringbuilder/create.pp\", 44]:", > "Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta", > "Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted", > "Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d 
/opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ rsync_srcs+=' /var/www'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift_ringbuilder", > "++ stat -c %y /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:19.771406277 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift_ringbuilder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift_ringbuilder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift_ringbuilder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift_ringbuilder --mtime=1970-01-01", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift_ringbuilder --mtime=1970-01-01", > "2018-06-22 13:10:34,443 INFO: 33380 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-06-22 13:10:34,498 DEBUG: 33380 -- docker-puppet-swift_ringbuilder", > "2018-06-22 13:10:34,498 INFO: 33380 -- Finished processing puppet configs for swift_ringbuilder", > "2018-06-22 13:10:34,499 INFO: 33380 -- Starting configuration of sahara using image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 13:10:34,499 DEBUG: 33380 -- config_volume sahara", > "2018-06-22 13:10:34,500 DEBUG: 33380 -- puppet_tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-06-22 13:10:34,500 DEBUG: 33380 -- manifest 
include ::tripleo::profile::base::sahara::api", > "include ::tripleo::profile::base::sahara::engine", > "2018-06-22 13:10:34,500 DEBUG: 33380 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 13:10:34,500 DEBUG: 33380 -- volumes []", > "2018-06-22 13:10:34,500 INFO: 33380 -- Removing container: docker-puppet-sahara", > "2018-06-22 13:10:34,571 INFO: 33380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 13:10:37,076 DEBUG: 33380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-api", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "6c5f7e9a0fe8: Pulling fs layer", > "5f67eb984180: Pulling fs layer", > "5f67eb984180: Verifying Checksum", > "5f67eb984180: Download complete", > "6c5f7e9a0fe8: Verifying Checksum", > "6c5f7e9a0fe8: Download complete", > "6c5f7e9a0fe8: Pull complete", > "5f67eb984180: Pull complete", > "Digest: sha256:702a41a4d211978832441c041a232227b3d2484d71ef01a8bf7d5332091587a5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 13:10:37,079 DEBUG: 33380 -- NET_HOST enabled", > "2018-06-22 13:10:37,080 DEBUG: 33380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-sahara --env PUPPET_TAGS=file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template --env NAME=sahara --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpmBiE17:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", > "2018-06-22 13:10:38,847 DEBUG: 33381 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.78 seconds", > "Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'", > "Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'", > "Notice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'", > "Notice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'", > "Notice: 
/Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'", > "Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{md5}f58b0483b70b4e73b5f67ff37b8f24a0'", > "Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{md5}fa95c477a2085c1f7f17ee5f8eccfb90'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Gnocchi::Db/Gnocchi_config[indexer/url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/auth_mode]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage/Gnocchi_config[storage/coordination_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/redis_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_keyring]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_pool]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_conffile]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/workers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/metric_processing_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/resource_id]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/archive_policy_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/flush_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Policy/Oslo::Policy[gnocchi_config]/Gnocchi_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Oslo::Middleware[gnocchi_config]/Gnocchi_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_name]/ensure: 
created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}3cb292a5545de9f30e5168d05f41a649'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}c6d1bc1fdbcb93bbd2596e4703f4108c' to '{md5}ac42062d69afa9d2671492ce0be87b7b'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'", > "Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'", > "Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'", > "Notice: /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'", > "Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'", > "Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'", > "Notice: 
/Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'", > "Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'", > "Notice: /Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'", > "Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'", > "Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'", > "Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as '{md5}c1363277984d22f99b70f7dce8753b60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as 
'{md5}88095a914eedc3c2c184dd5d74c3954c'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as '{md5}8077c34a71afcf41c8fc644830935915'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'", > "Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'", > "Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'", > "Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'", > "Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as '{md5}2d1a1afcae0c70557251829a8586eeaf'", > "Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as 
'{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'", > "Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{md5}66a1e2064a140c3e7dca7ac33877700e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'", > "Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'", > "Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'", > "Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'", > "Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'", > "Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{md5}c7726ef20347ef9a06ef68eeaad79765'", > "Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{md5}e282ac9f82fe5538692a4de3616fb695'", > "Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{md5}ab31a6ea611785f74851b578572e4157'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d/httpd.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed", > "Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/content: content changed '{md5}9e163ce201541f8aa36fcc1a372ed34d' to '{md5}b6f6f2773db25c777f1db887e7a3f57d'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed", > "Notice: 
/Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[/var/www/cgi-bin/gnocchi]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[gnocchi_wsgi]/ensure: defined content as '{md5}c03530dd30d25ec70b705e0c2f43df7a'", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/Apache::Vhost[gnocchi_wsgi]/Concat[10-gnocchi_wsgi.conf]/File[/etc/httpd/conf.d/10-gnocchi_wsgi.conf]/ensure: defined content as '{md5}1524f118b98bfea9814025b4dfb8fc4a'", > "Notice: Applied catalog in 1.11 seconds", > " Total: 110", > " Success: 110", > " Changed: 110", > " Out of sync: 110", > " Total: 253", > " Skipped: 42", > " Concat file: 0.00", > " Anchor: 0.00", > " Concat fragment: 0.00", > " Augeas: 0.02", > " Gnocchi config: 0.25", > " File: 0.28", > " Last run: 1529673037", > " Config retrieval: 4.35", > " Total: 4.90", > " Resources: 0.00", > " Config: 1529673031", > "Gathering files modified after 2018-06-22 13:10:26.918430803 +0000", > "2018-06-22 13:10:38,848 DEBUG: 33381 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config'", > "+ origin_of_time=/var/lib/config-data/gnocchi.origin_of_time", > "+ touch /var/lib/config-data/gnocchi.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config /etc/config.pp", > "Warning: ModuleLoader: module 'gnocchi' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/db.pp\", 26]:[\"/etc/puppet/modules/gnocchi/manifests/init.pp\", 54]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/config.pp\", 29]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/gnocchi.pp\", 31]", > "Warning: Scope(Class[Gnocchi::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/gnocchi", > "++ stat -c %y /var/lib/config-data/gnocchi.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:26.918430803 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/gnocchi", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/gnocchi", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/gnocchi.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/gnocchi --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/gnocchi --mtime=1970-01-01", > "2018-06-22 13:10:38,848 INFO: 33381 -- Removing container: docker-puppet-gnocchi", > "2018-06-22 13:10:38,897 DEBUG: 33381 -- docker-puppet-gnocchi", > "2018-06-22 13:10:38,897 INFO: 33381 -- Finished processing puppet configs for gnocchi", > "2018-06-22 13:10:38,897 INFO: 33381 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:38,897 DEBUG: 33381 -- config_volume clustercheck", > "2018-06-22 13:10:38,897 DEBUG: 33381 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-22 13:10:38,897 DEBUG: 33381 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-06-22 13:10:38,897 DEBUG: 33381 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:38,898 DEBUG: 33381 -- volumes []", > "2018-06-22 13:10:38,898 INFO: 33381 -- Removing container: docker-puppet-clustercheck", > "2018-06-22 13:10:38,960 INFO: 33381 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:43,441 DEBUG: 33379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.43 seconds", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", 
> "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}37ed0de7c9ebb4682f22584b78bf1bc4'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed 
'{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}0736aa6e5e26bedfe11b9ef7e39d7b59'", > "Notice: Applied catalog in 7.35 seconds", > " Total: 132", > " Success: 132", > " Changed: 132", > " Out of sync: 132", > " Total: 371", > " Skipped: 39", > " Package: 0.10", > " File: 0.50", > " Total: 11.85", > " Last run: 1529673041", > " Config retrieval: 5.08", > " Nova config: 6.15", > " Config: 1529673029", > "Gathering files modified after 2018-06-22 13:10:24.219421621 +0000", > "2018-06-22 13:10:43,441 DEBUG: 33379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova_placement.origin_of_time", > "+ touch /var/lib/config-data/nova_placement.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 530]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Scope(Class[Nova::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova_placement", > "++ stat -c %y /var/lib/config-data/nova_placement.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:24.219421621 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_placement", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_placement", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova_placement.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_placement --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_placement --mtime=1970-01-01", > "2018-06-22 13:10:43,441 INFO: 33379 -- Removing container: docker-puppet-nova_placement", > "2018-06-22 13:10:43,489 DEBUG: 33379 -- docker-puppet-nova_placement", > "2018-06-22 13:10:43,490 INFO: 33379 -- Finished processing puppet configs for nova_placement", > "2018-06-22 13:10:43,490 INFO: 33379 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 13:10:43,490 DEBUG: 33379 -- config_volume aodh", > "2018-06-22 13:10:43,490 DEBUG: 33379 -- puppet_tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config", > "2018-06-22 13:10:43,490 DEBUG: 33379 -- manifest include tripleo::profile::base::aodh::api", > "include tripleo::profile::base::aodh::evaluator", > "include tripleo::profile::base::aodh::listener", > "include tripleo::profile::base::aodh::notifier", > "2018-06-22 13:10:43,490 DEBUG: 33379 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 13:10:43,491 DEBUG: 33379 -- volumes []", > "2018-06-22 13:10:43,492 INFO: 33379 -- Removing container: docker-puppet-aodh", > "2018-06-22 13:10:43,553 
INFO: 33379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 13:10:45,409 DEBUG: 33381 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-mariadb ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-mariadb", > "2ee1f6a99b58: Pulling fs layer", > "2ee1f6a99b58: Verifying Checksum", > "2ee1f6a99b58: Download complete", > "2ee1f6a99b58: Pull complete", > "Digest: sha256:2a886d2154594b405341b26bdc272a2796459d288a4fde8b2ee6f5ca253f6792", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:45,412 DEBUG: 33381 -- NET_HOST enabled", > "2018-06-22 13:10:45,412 DEBUG: 33381 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-clustercheck --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpA8ab_q:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:46,344 DEBUG: 33379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-api ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-api", > "cb7d08d4cc0c: Pulling fs layer", > "6e57c8911d7b: Pulling fs layer", > "6e57c8911d7b: Verifying Checksum", > "6e57c8911d7b: Download complete", > "cb7d08d4cc0c: Verifying Checksum", > "cb7d08d4cc0c: Download complete", > "cb7d08d4cc0c: Pull complete", > "6e57c8911d7b: Pull complete", > "Digest: sha256:fa189b1bb39e6c29a0fe5a6e824ae0f89206ba6749e373e719edac2129e0ff6b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 13:10:46,348 DEBUG: 33379 -- NET_HOST enabled", > "2018-06-22 13:10:46,348 DEBUG: 33379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-aodh --env PUPPET_TAGS=file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config --env NAME=aodh --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpYr7YVi:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", > "2018-06-22 13:10:47,473 DEBUG: 33380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for 
controller-0.localdomain in environment production in 2.10 seconds", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/plugins]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/port]/ensure: created", > "Notice: /Stage[main]/Sahara::Service::Api/Sahara_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Policy/Oslo::Policy[sahara_config]/Sahara_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Default[sahara_config]/Sahara_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Rabbit[sahara_config]/Sahara_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Zmq[sahara_config]/Sahara_config[DEFAULT/rpc_zmq_host]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 1.71 seconds", > " Total: 25", > " Success: 25", > " Total: 196", > " Skipped: 23", > " Out of sync: 25", > " Changed: 25", > " File: 0.00", > " Package: 0.05", > " Sahara config: 1.13", > " Last run: 1529673046", > " Config retrieval: 2.38", > " Total: 3.58", > " Config: 1529673042", > "Gathering files modified after 2018-06-22 13:10:37.276465155 +0000", > "2018-06-22 13:10:47,473 DEBUG: 33380 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template'", > "+ origin_of_time=/var/lib/config-data/sahara.origin_of_time", > "+ touch /var/lib/config-data/sahara.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template /etc/config.pp", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/db.pp\", 69]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 380]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 381]", > "Warning: Scope(Class[Sahara]): The use_neutron parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Sahara]): sahara::admin_user, sahara::admin_password, sahara::auth_uri, sahara::identity_uri, sahara::admin_tenant_name and sahara::memcached_servers are deprecated. 
Please use sahara::keystone::authtoken::* parameters instead.", > "Warning: Scope(Class[Sahara::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/sahara", > "++ stat -c %y /var/lib/config-data/sahara.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:37.276465155 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/sahara", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/sahara", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/sahara.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/sahara --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/sahara --mtime=1970-01-01", > "2018-06-22 13:10:47,474 INFO: 33380 -- Removing container: docker-puppet-sahara", > "2018-06-22 13:10:47,509 DEBUG: 33380 -- docker-puppet-sahara", > "2018-06-22 13:10:47,509 INFO: 33380 -- Finished processing puppet configs for sahara", > "2018-06-22 13:10:47,510 INFO: 33380 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:47,510 DEBUG: 33380 -- config_volume mysql", > "2018-06-22 13:10:47,510 DEBUG: 33380 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-22 13:10:47,510 DEBUG: 33380 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "2018-06-22 13:10:47,510 DEBUG: 33380 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:47,510 DEBUG: 33380 -- volumes []", > "2018-06-22 13:10:47,510 INFO: 33380 -- Removing container: docker-puppet-mysql", > "2018-06-22 13:10:47,556 INFO: 33380 -- Image already exists: 
192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:47,559 DEBUG: 33380 -- NET_HOST enabled", > "2018-06-22 13:10:47,559 DEBUG: 33380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-mysql --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpb8KhP0:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", > "2018-06-22 13:10:52,521 DEBUG: 33381 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.43 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}5b8acaa58a90d174e15437cd06a5f6f1'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{md5}3afdef3c0450b1869412e40a88f2bfb2'", > "Notice: Applied catalog in 0.04 seconds", > " Total: 4", > " Success: 4", > " Total: 13", > " Out of sync: 3", > " Changed: 3", > " Skipped: 9", > " File: 0.02", > " Config retrieval: 0.58", > " Total: 0.60", > " Last run: 1529673051", > " Config: 1529673051", > "Gathering files modified after 2018-06-22 13:10:45.610491803 +0000", > "2018-06-22 13:10:52,521 DEBUG: 33381 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,file ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,file'", > "+ origin_of_time=/var/lib/config-data/clustercheck.origin_of_time", > "+ touch /var/lib/config-data/clustercheck.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,file /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/clustercheck", > "++ stat -c %y /var/lib/config-data/clustercheck.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:45.610491803 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/clustercheck", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/clustercheck", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/clustercheck.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/clustercheck --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/clustercheck --mtime=1970-01-01", > "2018-06-22 13:10:52,521 INFO: 33381 -- Removing container: docker-puppet-clustercheck", > "2018-06-22 13:10:52,557 DEBUG: 33381 -- 
docker-puppet-clustercheck", > "2018-06-22 13:10:52,557 INFO: 33381 -- Finished processing puppet configs for clustercheck", > "2018-06-22 13:10:52,557 INFO: 33381 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 13:10:52,557 DEBUG: 33381 -- config_volume redis", > "2018-06-22 13:10:52,558 DEBUG: 33381 -- puppet_tags file,file_line,concat,augeas,cron,exec", > "2018-06-22 13:10:52,558 DEBUG: 33381 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-06-22 13:10:52,558 DEBUG: 33381 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 13:10:52,558 DEBUG: 33381 -- volumes []", > "2018-06-22 13:10:52,558 INFO: 33381 -- Removing container: docker-puppet-redis", > "2018-06-22 13:10:52,622 INFO: 33381 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 13:10:56,065 DEBUG: 33381 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-redis ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-redis", > "13055d264df1: Pulling fs layer", > "dfc35b833f61: Pulling fs layer", > "13055d264df1: Verifying Checksum", > "13055d264df1: Download complete", > "13055d264df1: Pull complete", > "dfc35b833f61: Verifying Checksum", > "dfc35b833f61: Download complete", > "dfc35b833f61: Pull complete", > "Digest: sha256:7782f917270ad46f451fe06063a6adb53afe9d81474a7af374ed7b9c09d1b055", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 13:10:56,068 DEBUG: 33381 -- NET_HOST enabled", > "2018-06-22 13:10:56,068 DEBUG: 33381 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-redis --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec --env NAME=redis --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp1QE8HH:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", > "2018-06-22 13:10:57,964 DEBUG: 33380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.95 seconds", > 
"Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}e51811cf726fa3e6a5a924a379dc5198'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}5a169246460baf3e552027b0f5e8a1f8'", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}da920df6baf6c7424ed796c11086927e'", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo/pacemaker-restarts]/ensure: created", > "Notice: Applied catalog in 0.37 seconds", > " Total: 6", > " Success: 6", > " Skipped: 226", > " Total: 233", > " Out of sync: 6", > " Changed: 6", > " File: 0.03", > " Last run: 1529673056", > " Config retrieval: 4.37", > " Total: 4.40", > " Config: 1529673052", > "Gathering files modified after 2018-06-22 13:10:47.746498495 +0000", > "2018-06-22 13:10:57,965 DEBUG: 33380 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/mysql.origin_of_time", > "+ touch /var/lib/config-data/mysql.origin_of_time", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"/etc/config.pp\", 4]", > "Warning: ModuleLoader: module 'aodh' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 58]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'panko' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/mysql", > "++ stat -c %y /var/lib/config-data/mysql.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:47.746498495 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/mysql", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/mysql", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/mysql.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/mysql --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/mysql --mtime=1970-01-01", > "2018-06-22 13:10:57,965 INFO: 33380 -- Removing container: docker-puppet-mysql", > "2018-06-22 13:10:57,999 DEBUG: 33380 -- docker-puppet-mysql", > "2018-06-22 13:10:57,999 INFO: 33380 -- Finished processing puppet configs for mysql", > "2018-06-22 13:10:58,000 INFO: 33380 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 13:10:58,000 DEBUG: 33380 -- config_volume nova", > "2018-06-22 13:10:58,000 DEBUG: 33380 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config", > "2018-06-22 13:10:58,000 DEBUG: 33380 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::conductor", > "include tripleo::profile::base::nova::consoleauth", > "include tripleo::profile::base::nova::scheduler", > "include tripleo::profile::base::nova::vncproxy", > "2018-06-22 13:10:58,000 DEBUG: 33380 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 13:10:58,000 DEBUG: 33380 -- volumes []", > "2018-06-22 13:10:58,000 INFO: 33380 -- Removing container: docker-puppet-nova", > "2018-06-22 13:10:58,060 
INFO: 33380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 13:10:59,344 DEBUG: 33380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-api", > "0e3031608420: Already exists", > "b32f33ab1345: Pulling fs layer", > "b32f33ab1345: Verifying Checksum", > "b32f33ab1345: Download complete", > "b32f33ab1345: Pull complete", > "Digest: sha256:98f38e1deb6081bcc8d18a914af693593a06823741381f71dacd158824ef18f8", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 13:10:59,347 DEBUG: 33380 -- NET_HOST enabled", > "2018-06-22 13:10:59,347 DEBUG: 33380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config --env NAME=nova --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpQvda0c:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", > "2018-06-22 13:11:00,075 DEBUG: 33379 -- Notice: hiera(): Cannot load backend module_data: 
cannot load such file -- hiera/backend/module_data_backend", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/gnocchi_external_project_owner]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/host]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/port]/ensure: created", > "Notice: /Stage[main]/Aodh::Evaluator/Aodh_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Db/Oslo::Db[aodh_config]/Aodh_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Rabbit[aodh_config]/Aodh_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Default[aodh_config]/Aodh_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: 
/Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Policy/Oslo::Policy[aodh_config]/Aodh_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Oslo::Middleware[aodh_config]/Aodh_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}fc316e9d923e3a94945cfb8c64307e1d'", > "Notice: 
/Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/owner: owner changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/group: group changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[aodh_wsgi]/ensure: defined content as '{md5}09d823939c45501c11f2096289fe70cf'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/Apache::Vhost[aodh_wsgi]/Concat[10-aodh_wsgi.conf]/File[/etc/httpd/conf.d/10-aodh_wsgi.conf]/ensure: defined content as '{md5}3a5e55367f0144775f4f683dd00c98a7'", > "Notice: Applied catalog in 2.47 seconds", > " Total: 112", > " Success: 112", > " Changed: 111", > " Out of sync: 111", > " Total: 331", > " Skipped: 40", > " File: 0.78", > " Aodh config: 0.87", > " Last run: 1529673058", > " Config retrieval: 4.45", > " Total: 6.18", > "Gathering files modified after 2018-06-22 13:10:46.564494798 +0000", > "2018-06-22 13:11:00,075 DEBUG: 33379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config'", > "+ origin_of_time=/var/lib/config-data/aodh.origin_of_time", > "+ touch /var/lib/config-data/aodh.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/aodh/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/aodh.pp\", 123]", > "Warning: Scope(Class[Aodh::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/oslo/manifests/db.pp\", 140]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/aodh", > "++ stat -c %y /var/lib/config-data/aodh.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:46.564494798 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/aodh", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/aodh", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/aodh.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/aodh --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/aodh --mtime=1970-01-01", > "2018-06-22 13:11:00,075 INFO: 33379 -- Removing container: docker-puppet-aodh", > "2018-06-22 13:11:00,126 DEBUG: 33379 -- docker-puppet-aodh", > "2018-06-22 13:11:00,126 INFO: 33379 -- Finished processing puppet configs for aodh", > "2018-06-22 13:11:00,126 INFO: 33379 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:11:00,126 DEBUG: 33379 -- config_volume heat_api", > "2018-06-22 13:11:00,126 DEBUG: 33379 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-22 13:11:00,127 DEBUG: 33379 -- manifest include ::tripleo::profile::base::heat::api", > "2018-06-22 13:11:00,127 DEBUG: 33379 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:11:00,127 DEBUG: 33379 -- volumes []", > "2018-06-22 
13:11:00,127 INFO: 33379 -- Removing container: docker-puppet-heat_api", > "2018-06-22 13:11:00,196 INFO: 33379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:11:02,413 DEBUG: 33379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api", > "15497368e843: Pulling fs layer", > "a91507f6d5dc: Pulling fs layer", > "a91507f6d5dc: Verifying Checksum", > "a91507f6d5dc: Download complete", > "15497368e843: Download complete", > "15497368e843: Pull complete", > "a91507f6d5dc: Pull complete", > "Digest: sha256:7e8eb4cb5943296bd67f2e22c40a7519d3c71f8533541c54da0c9f5ef6b361ce", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:11:02,417 DEBUG: 33379 -- NET_HOST enabled", > "2018-06-22 13:11:02,417 DEBUG: 33379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmphy003v:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:11:03,248 DEBUG: 33381 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.89 seconds", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}94de54ece28c930b89fefe1be0a08a8f'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Triggered 'refresh' from 1 events", > "Notice: Applied catalog in 0.06 seconds", > " Restarted: 1", > " Skipped: 11", > " Total: 21", > " Exec: 0.00", > " Augeas: 0.01", > " Config retrieval: 1.04", > " Total: 1.07", > " Last run: 1529673062", > " Config: 1529673061", > "Gathering files modified after 2018-06-22 13:10:56.260524619 +0000", > "2018-06-22 13:11:03,248 DEBUG: 33381 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,exec ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec'", > "+ origin_of_time=/var/lib/config-data/redis.origin_of_time", > "+ touch /var/lib/config-data/redis.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,exec /etc/config.pp", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/redis", > "++ stat -c %y /var/lib/config-data/redis.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:56.260524619 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/redis", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/redis", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/redis.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/redis --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/redis --mtime=1970-01-01", > "2018-06-22 13:11:03,249 INFO: 33381 -- Removing container: docker-puppet-redis", > "2018-06-22 13:11:03,285 DEBUG: 33381 -- docker-puppet-redis", > "2018-06-22 13:11:03,285 INFO: 33381 -- Finished processing puppet configs for redis", > "2018-06-22 13:11:03,286 INFO: 33381 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 13:11:03,286 DEBUG: 33381 -- config_volume keystone", > "2018-06-22 13:11:03,286 DEBUG: 33381 -- puppet_tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config", > "2018-06-22 13:11:03,286 DEBUG: 33381 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "2018-06-22 13:11:03,286 DEBUG: 33381 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 13:11:03,286 DEBUG: 33381 -- volumes []", > "2018-06-22 13:11:03,287 INFO: 33381 -- Removing container: 
docker-puppet-keystone", > "2018-06-22 13:11:03,352 INFO: 33381 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 13:11:05,826 DEBUG: 33381 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-keystone ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-keystone", > "6222a19b9ac2: Pulling fs layer", > "900dd421e68b: Pulling fs layer", > "900dd421e68b: Download complete", > "6222a19b9ac2: Verifying Checksum", > "6222a19b9ac2: Download complete", > "6222a19b9ac2: Pull complete", > "900dd421e68b: Pull complete", > "Digest: sha256:5aaa5a4237af74f89ed31c8ff7e97414693ecfb9ce82bcb13f238c1a96030dc5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 13:11:05,829 DEBUG: 33381 -- NET_HOST enabled", > "2018-06-22 13:11:05,829 DEBUG: 33381 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-keystone --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpQS7sc1:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 13:11:15,374 DEBUG: 33379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.61 seconds", > "Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created", > "Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: 
/Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}0b4bad3c8a21111582786caceb3bc55a'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{md5}640891728ce5d46ae40234228561597c'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as '{md5}e7b2b5d57d7b13197d33bbcc8ee73b93'", > " Total: 121", > " Success: 121", > " Changed: 121", > " Out of sync: 121", > " 
Skipped: 32", > " Total: 335", > " Cron: 0.01", > " File: 0.35", > " Heat config: 1.47", > " Last run: 1529673073", > " Config retrieval: 4.18", > " Total: 6.07", > " Config: 1529673067", > "Gathering files modified after 2018-06-22 13:11:02.589543485 +0000", > "2018-06-22 13:11:15,374 DEBUG: 33379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,heat_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/heat_api.origin_of_time", > "+ touch /var/lib/config-data/heat_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/db.pp\", 75]:[\"/etc/puppet/modules/heat/manifests/init.pp\", 363]", > "Warning: Scope(Class[Heat::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/heat.pp\", 134]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api", > "++ stat -c %y /var/lib/config-data/heat_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:02.589543485 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api --mtime=1970-01-01", > "2018-06-22 13:11:15,374 INFO: 33379 -- Removing container: docker-puppet-heat_api", > "2018-06-22 13:11:15,416 DEBUG: 33379 -- docker-puppet-heat_api", > "2018-06-22 13:11:15,416 INFO: 33379 -- Finished processing puppet configs for heat_api", > "2018-06-22 13:11:15,417 INFO: 33379 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:11:15,417 DEBUG: 33379 -- config_volume heat", > "2018-06-22 13:11:15,417 DEBUG: 33379 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-22 13:11:15,417 DEBUG: 33379 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-06-22 13:11:15,417 DEBUG: 33379 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:11:15,417 DEBUG: 33379 -- volumes []", > "2018-06-22 13:11:15,417 INFO: 33379 -- Removing container: docker-puppet-heat", > "2018-06-22 13:11:15,463 INFO: 33379 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:11:15,466 DEBUG: 33379 -- NET_HOST 
enabled", > "2018-06-22 13:11:15,466 DEBUG: 33379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpFlERxd:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", > "2018-06-22 13:11:18,246 DEBUG: 33381 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.60 seconds", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{md5}3ddf048c6871705212f4baf1cfefd644'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{md5}647fa860739b2fc2966edcf071d44bce'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{md5}a5a47011b0d90d93073fccce60578ec1'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{md5}eeabf96eb5042b89a83b6e200a9e1507'", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone::Config/Keystone_config[ec2/driver]/ensure: created", > "Notice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created", > 
"Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}aa40eeefa414cf0235029477fb28fba9'", > "Notice: 
/Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}653272cb76fd2943463a866083dbbfde'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as '{md5}b82460ec44e6c9b3e569f0be298c5774'", > "Notice: Applied catalog in 2.43 seconds", > " Total: 122", > " Success: 122", > " Changed: 122", > " Out of sync: 122", > " Total: 320", > " Skipped: 34", > " Package: 0.04", > " File: 0.36", > " Keystone config: 1.46", > " Last run: 1529673076", > " Config retrieval: 4.11", > " Total: 6.00", > " Config: 1529673070", > "Gathering files modified after 2018-06-22 13:11:06.002553468 +0000", > "2018-06-22 13:11:18,247 DEBUG: 33381 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config'", > "+ origin_of_time=/var/lib/config-data/keystone.origin_of_time", > "+ touch /var/lib/config-data/keystone.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/keystone/manifests/init.pp\", 757]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 760]:[\"/etc/config.pp\", 3]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 1108]:[\"/etc/config.pp\", 3]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/keystone", > "++ stat -c %y /var/lib/config-data/keystone.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:06.002553468 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/keystone", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/keystone", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/keystone.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/keystone --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/keystone --mtime=1970-01-01", > "2018-06-22 13:11:18,247 INFO: 33381 -- Removing container: docker-puppet-keystone", > "2018-06-22 13:11:18,292 DEBUG: 33381 -- docker-puppet-keystone", > "2018-06-22 13:11:18,292 INFO: 33381 -- Finished processing puppet configs for keystone", > "2018-06-22 13:11:18,293 INFO: 33381 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 13:11:18,293 DEBUG: 33381 -- config_volume memcached", > "2018-06-22 13:11:18,293 DEBUG: 33381 -- puppet_tags 
file,file_line,concat,augeas,cron,file", > "2018-06-22 13:11:18,293 DEBUG: 33381 -- manifest include ::tripleo::profile::base::memcached", > "2018-06-22 13:11:18,293 DEBUG: 33381 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 13:11:18,293 DEBUG: 33381 -- volumes []", > "2018-06-22 13:11:18,293 INFO: 33381 -- Removing container: docker-puppet-memcached", > "2018-06-22 13:11:18,352 INFO: 33381 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 13:11:19,731 DEBUG: 33381 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-memcached ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-memcached", > "ca902f72935a: Pulling fs layer", > "ca902f72935a: Verifying Checksum", > "ca902f72935a: Download complete", > "ca902f72935a: Pull complete", > "Digest: sha256:d1285a1e78900b5c0c58e5c03f624e46f6b871ff4ffa9d972ef012568a9f1046", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 13:11:19,734 DEBUG: 33381 -- NET_HOST enabled", > "2018-06-22 13:11:19,734 DEBUG: 33381 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-memcached --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpE2s4O9:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", > "2018-06-22 13:11:21,708 DEBUG: 33380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.94 seconds", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}4f3bcbde7510fa19b7c63283a7470976'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: 
/Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created", > "Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage db purge]/ensure: created", > "Notice: 
/Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}5fb7a8f737662544790610b5d8f92ceb'", > "Notice: Applied catalog in 9.67 seconds", > " Total: 180", > " Success: 180", > " Changed: 180", > " Out of sync: 180", > " Total: 501", > " Skipped: 75", > " Cron: 0.02", > " Package: 0.09", > " File: 0.21", > " Total: 14.52", > " Last run: 1529673079", > " Config retrieval: 5.67", > " Nova config: 8.51", > " Config: 1529673064", > "Gathering files modified after 2018-06-22 13:10:59.568534538 +0000", > "2018-06-22 13:11:21,709 DEBUG: 33380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova.origin_of_time", > "+ touch /var/lib/config-data/nova.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/init.pp\", 533]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/nova/manifests/scheduler/filter.pp\", 150]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/scheduler.pp\", 32]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova", > "++ stat -c %y /var/lib/config-data/nova.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:10:59.568534538 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova --mtime=1970-01-01", > "2018-06-22 13:11:21,709 INFO: 33380 -- Removing container: docker-puppet-nova", > "2018-06-22 13:11:21,766 DEBUG: 33380 -- docker-puppet-nova", > "2018-06-22 13:11:21,766 INFO: 33380 -- Finished processing puppet configs for nova", > "2018-06-22 13:11:21,766 INFO: 33380 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:11:21,766 DEBUG: 33380 -- config_volume iscsid", > "2018-06-22 13:11:21,766 DEBUG: 33380 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-06-22 13:11:21,766 DEBUG: 33380 -- manifest include ::tripleo::profile::base::iscsid", > 
"2018-06-22 13:11:21,767 DEBUG: 33380 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:11:21,767 DEBUG: 33380 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-06-22 13:11:21,767 INFO: 33380 -- Removing container: docker-puppet-iscsid", > "2018-06-22 13:11:21,829 INFO: 33380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:11:22,445 DEBUG: 33380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "ab4eae34093d: Pulling fs layer", > "ab4eae34093d: Verifying Checksum", > "ab4eae34093d: Download complete", > "ab4eae34093d: Pull complete", > "Digest: sha256:a46aa93fee87b0f173118da5c2a18dc271772adb839a481ec07f2a53534ac53c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:11:22,448 DEBUG: 33380 -- NET_HOST enabled", > "2018-06-22 13:11:22,448 DEBUG: 33380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmplT58qo:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi 
--entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", > "2018-06-22 13:11:25,713 DEBUG: 33381 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.61 seconds", > "Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}b2122e2e949e073bd7247089cc6c41bf'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d/memcached.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: Applied catalog in 0.08 seconds", > " Total: 3", > " Success: 3", > " Skipped: 10", > " Config retrieval: 0.72", > " Total: 0.74", > " Last run: 1529673085", > " Config: 1529673084", > "Gathering files modified after 2018-06-22 13:11:19.911592798 +0000", > "2018-06-22 13:11:25,713 DEBUG: 33381 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/memcached.origin_of_time", > "+ touch /var/lib/config-data/memcached.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/memcached", > "++ stat -c %y /var/lib/config-data/memcached.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:19.911592798 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/memcached", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/memcached", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/memcached.origin_of_time -not -path 
'/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/memcached --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/memcached --mtime=1970-01-01", > "2018-06-22 13:11:25,714 INFO: 33381 -- Removing container: docker-puppet-memcached", > "2018-06-22 13:11:25,760 DEBUG: 33381 -- docker-puppet-memcached", > "2018-06-22 13:11:25,760 INFO: 33381 -- Finished processing puppet configs for memcached", > "2018-06-22 13:11:25,760 INFO: 33381 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 13:11:25,760 DEBUG: 33381 -- config_volume panko", > "2018-06-22 13:11:25,760 DEBUG: 33381 -- puppet_tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config", > "2018-06-22 13:11:25,761 DEBUG: 33381 -- manifest include tripleo::profile::base::panko::api", > "2018-06-22 13:11:25,761 DEBUG: 33381 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 13:11:25,761 DEBUG: 33381 -- volumes []", > "2018-06-22 13:11:25,761 INFO: 33381 -- Removing container: docker-puppet-panko", > "2018-06-22 13:11:25,828 INFO: 33381 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 13:11:26,026 DEBUG: 33379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.03 seconds", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created", > 
"Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created", > "Notice: Applied catalog in 1.98 seconds", > " Total: 48", > " Success: 48", > " Skipped: 21", > " Total: 223", > " Out of sync: 48", > " Changed: 48", > " Heat config: 1.66", > " Last run: 1529673084", > " Config retrieval: 2.34", > " Total: 4.07", > " Config: 1529673080", > "Gathering files modified after 2018-06-22 13:11:15.653580983 +0000", > "2018-06-22 13:11:26,026 DEBUG: 33379 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat.origin_of_time", > "+ touch /var/lib/config-data/heat.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat", > "++ stat -c %y /var/lib/config-data/heat.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:15.653580983 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat --mtime=1970-01-01", > "2018-06-22 13:11:26,026 INFO: 33379 -- Removing container: docker-puppet-heat", > "2018-06-22 13:11:26,070 DEBUG: 33379 -- docker-puppet-heat", > "2018-06-22 13:11:26,070 INFO: 33379 -- Finished processing puppet configs for heat", > "2018-06-22 13:11:26,071 INFO: 33379 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 13:11:26,071 DEBUG: 33379 -- config_volume 
cinder", > "2018-06-22 13:11:26,071 DEBUG: 33379 -- puppet_tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line", > "2018-06-22 13:11:26,071 DEBUG: 33379 -- manifest include ::tripleo::profile::base::cinder::api", > "include ::tripleo::profile::base::cinder::backup::ceph", > "include ::tripleo::profile::base::cinder::scheduler", > "include ::tripleo::profile::base::lvm", > "2018-06-22 13:11:26,071 DEBUG: 33379 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 13:11:26,071 DEBUG: 33379 -- volumes []", > "2018-06-22 13:11:26,071 INFO: 33379 -- Removing container: docker-puppet-cinder", > "2018-06-22 13:11:26,146 INFO: 33379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 13:11:28,422 DEBUG: 33381 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-panko-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-panko-api", > "e67be68e6dd6: Pulling fs layer", > "37e4d86c7a37: Pulling fs layer", > "37e4d86c7a37: Verifying Checksum", > "37e4d86c7a37: Download complete", > "e67be68e6dd6: Verifying Checksum", > "e67be68e6dd6: Download complete", > "e67be68e6dd6: Pull complete", > "37e4d86c7a37: Pull complete", > "Digest: sha256:af7f2810620f1617a589387bcde33173bbf96ee4d0ea85e34d70bdfd83328d21", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 13:11:28,425 DEBUG: 33381 -- NET_HOST enabled", > "2018-06-22 13:11:28,425 DEBUG: 33381 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-panko --env PUPPET_TAGS=file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config --env NAME=panko --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp9JFIYf:/etc/config.pp:ro,z --volume 
/etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", > "2018-06-22 13:11:28,459 DEBUG: 33380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.47 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > "Notice: Applied catalog in 0.07 seconds", > " Total: 2", > " Success: 2", > " Total: 10", > " Out of sync: 2", > " Changed: 2", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.60", > " Total: 0.62", > " Last run: 1529673087", > " Config: 1529673087", > "Gathering files modified after 2018-06-22 13:11:22.626600229 +0000", > "2018-06-22 13:11:28,459 DEBUG: 33380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes 
--color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:22.626600229 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-06-22 13:11:28,459 INFO: 33380 -- Removing container: docker-puppet-iscsid", > "2018-06-22 13:11:28,500 DEBUG: 33380 -- docker-puppet-iscsid", > "2018-06-22 13:11:28,500 INFO: 33380 -- Finished processing puppet configs for iscsid", > "2018-06-22 13:11:28,500 INFO: 33380 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 13:11:28,500 DEBUG: 33380 -- config_volume glance_api", > "2018-06-22 13:11:28,500 DEBUG: 33380 -- puppet_tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-06-22 13:11:28,500 DEBUG: 33380 -- manifest include ::tripleo::profile::base::glance::api", > "2018-06-22 13:11:28,500 DEBUG: 33380 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 13:11:28,501 DEBUG: 33380 -- volumes []", > "2018-06-22 13:11:28,501 INFO: 33380 -- Removing container: docker-puppet-glance_api", > "2018-06-22 13:11:28,575 INFO: 33380 -- Pulling image: 
192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 13:11:34,262 DEBUG: 33379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-api", > "5e7b63a88a76: Pulling fs layer", > "56e05018c234: Pulling fs layer", > "56e05018c234: Verifying Checksum", > "56e05018c234: Download complete", > "5e7b63a88a76: Verifying Checksum", > "5e7b63a88a76: Download complete", > "5e7b63a88a76: Pull complete", > "56e05018c234: Pull complete", > "Digest: sha256:183deb2657acebac30853e0973dad9bbf1f1f1288cff99eeb24fb4ae2fc7b1d3", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 13:11:34,267 DEBUG: 33379 -- NET_HOST enabled", > "2018-06-22 13:11:34,267 DEBUG: 33379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-cinder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpY_jGhf:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume 
/etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", > "2018-06-22 13:11:34,425 DEBUG: 33380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-glance-api ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-glance-api", > "a5deab52212a: Pulling fs layer", > "8b31454e1757: Pulling fs layer", > "8b31454e1757: Verifying Checksum", > "8b31454e1757: Download complete", > "a5deab52212a: Verifying Checksum", > "a5deab52212a: Download complete", > "a5deab52212a: Pull complete", > "8b31454e1757: Pull complete", > "Digest: sha256:266d9d00d90cc84effdabd7cad9bea244a8fb918a029a3d2bafa4e2af9a72e77", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 13:11:34,428 DEBUG: 33380 -- NET_HOST enabled", > "2018-06-22 13:11:34,428 DEBUG: 33380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-glance_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config --env NAME=glance_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpWWdij7:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", > "2018-06-22 13:11:40,119 DEBUG: 33381 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.48 seconds", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/host]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/port]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/workers]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_api_paste_ini[pipeline:main/pipeline]/ensure: created", > "Notice: /Stage[main]/Panko::Expirer/Cron[panko-expirer]/ensure: created", > "Notice: /Stage[main]/Panko::Logging/Oslo::Log[panko_config]/Panko_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Panko::Db/Oslo::Db[panko_config]/Panko_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Panko::Policy/Oslo::Policy[panko_config]/Panko_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: 
/Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Oslo::Middleware[panko_config]/Panko_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}83ed74d75e6969c931075bd7f8c4c5c6'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[/var/www/cgi-bin/panko]/ensure: created", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[panko_wsgi]/ensure: defined content as '{md5}e6f446b6267321fd2251a3e83021181a'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/Apache::Vhost[panko_wsgi]/Concat[10-panko_wsgi.conf]/File[/etc/httpd/conf.d/10-panko_wsgi.conf]/ensure: defined content as '{md5}bfdade05977c387c2e864c291e53d1ec'", > "Notice: Applied catalog in 1.18 seconds", > " Total: 101", > " Success: 101", > " Changed: 101", > " Out of sync: 101", > " Total: 255", > " Panko api paste ini: 0.00", > " File: 0.30", > " Panko config: 0.35", > " Last run: 1529673098", > " Config retrieval: 3.94", > " Total: 4.65", > " Config: 1529673093", > "Gathering files modified after 
2018-06-22 13:11:28.661616468 +0000", > "2018-06-22 13:11:40,120 DEBUG: 33381 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config'", > "+ origin_of_time=/var/lib/config-data/panko.origin_of_time", > "+ touch /var/lib/config-data/panko.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko.pp\", 32]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/db.pp\", 59]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko/api.pp\", 83]", > "Warning: Scope(Class[Panko::Api]): This Class is deprecated and will be removed in future releases.", > "Warning: Scope(Class[Panko::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/panko", > "++ stat -c %y /var/lib/config-data/panko.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:28.661616468 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/panko", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/panko", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/panko.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/panko --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/panko --mtime=1970-01-01", > "2018-06-22 13:11:40,120 INFO: 33381 -- Removing container: docker-puppet-panko", > "2018-06-22 13:11:40,166 DEBUG: 33381 -- docker-puppet-panko", > "2018-06-22 13:11:40,166 INFO: 33381 -- Finished processing puppet configs for panko", > "2018-06-22 13:11:40,167 INFO: 33381 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:11:40,167 DEBUG: 33381 -- config_volume crond", > "2018-06-22 13:11:40,167 DEBUG: 33381 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-06-22 13:11:40,167 DEBUG: 33381 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-06-22 13:11:40,167 DEBUG: 33381 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:11:40,167 DEBUG: 33381 -- volumes []", > "2018-06-22 13:11:40,167 INFO: 33381 -- Removing container: docker-puppet-crond", > "2018-06-22 13:11:40,227 INFO: 33381 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:11:40,709 DEBUG: 33381 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "a94d9ea04263: Pulling fs layer", > "a94d9ea04263: Verifying Checksum", > "a94d9ea04263: Download complete", > "a94d9ea04263: Pull complete", > "Digest: sha256:cbc58f1f133447db6c3e634ca05251825f6a2ede8528959b5cd6e0cb1c3de3ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:11:40,712 DEBUG: 33381 -- NET_HOST enabled", > "2018-06-22 13:11:40,712 DEBUG: 33381 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpQNp8YY:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", > "2018-06-22 13:11:45,852 DEBUG: 33381 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.41 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as 
'{md5}13ae5d5b43716a32da6855edd3f15758'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.03 seconds", > " Skipped: 7", > " Total: 9", > " Config retrieval: 0.50", > " Total: 0.51", > " Last run: 1529673105", > " Config: 1529673104", > "Gathering files modified after 2018-06-22 13:11:40.887648218 +0000", > "2018-06-22 13:11:45,852 DEBUG: 33381 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:40.887648218 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-06-22 13:11:45,852 INFO: 33381 -- Removing container: docker-puppet-crond", > "2018-06-22 13:11:45,903 DEBUG: 33381 -- docker-puppet-crond", > "2018-06-22 13:11:45,903 INFO: 33381 -- Finished processing puppet configs for crond", > "2018-06-22 13:11:45,904 INFO: 33381 -- Starting configuration of haproxy using image 
192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 13:11:45,904 DEBUG: 33381 -- config_volume haproxy", > "2018-06-22 13:11:45,904 DEBUG: 33381 -- puppet_tags file,file_line,concat,augeas,cron,haproxy_config", > "2018-06-22 13:11:45,904 DEBUG: 33381 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "2018-06-22 13:11:45,904 DEBUG: 33381 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 13:11:45,904 DEBUG: 33381 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-06-22 13:11:45,904 INFO: 33381 -- Removing container: docker-puppet-haproxy", > "2018-06-22 13:11:45,972 INFO: 33381 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 13:11:46,324 DEBUG: 33380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.17 seconds", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created", 
> "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_user]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created", > "Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created", > "Notice: 
/Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 2.46 seconds", > " Total: 44", > " Success: 44", > " Out of sync: 44", > " Changed: 44", > " Skipped: 59", > " Glance cache config: 0.23", > " Glance api config: 1.91", > " Config retrieval: 2.51", > " Total: 4.72", > " Config: 1529673100", > "Gathering files modified after 2018-06-22 13:11:35.371634081 +0000", > "2018-06-22 13:11:46,324 DEBUG: 33380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config'", > "+ origin_of_time=/var/lib/config-data/glance_api.origin_of_time", > "+ touch /var/lib/config-data/glance_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config /etc/config.pp", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/config.pp\", 48]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/glance/api.pp\", 202]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/api/db.pp\", 69]:[\"/etc/puppet/modules/glance/manifests/api.pp\", 371]", > "Warning: Unknown variable: 'default_store_real'. at /etc/puppet/modules/glance/manifests/api.pp:438:9", > "Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to http", > "Warning: Scope(Class[Glance::Api::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/glance_api", > "++ stat -c %y /var/lib/config-data/glance_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:35.371634081 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/glance_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/glance_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/glance_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/glance_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/glance_api --mtime=1970-01-01", > "2018-06-22 13:11:46,324 INFO: 33380 -- Removing container: docker-puppet-glance_api", > "2018-06-22 13:11:46,368 DEBUG: 33380 -- docker-puppet-glance_api", > "2018-06-22 13:11:46,369 INFO: 33380 -- Finished processing puppet configs for glance_api", > "2018-06-22 13:11:46,369 INFO: 33380 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 13:11:46,369 DEBUG: 33380 -- config_volume 
rabbitmq", > "2018-06-22 13:11:46,369 DEBUG: 33380 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-06-22 13:11:46,369 DEBUG: 33380 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "2018-06-22 13:11:46,369 DEBUG: 33380 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 13:11:46,369 DEBUG: 33380 -- volumes []", > "2018-06-22 13:11:46,370 INFO: 33380 -- Removing container: docker-puppet-rabbitmq", > "2018-06-22 13:11:46,441 INFO: 33380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 13:11:49,922 DEBUG: 33381 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-haproxy ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-haproxy", > "a82042577283: Pulling fs layer", > "a82042577283: Verifying Checksum", > "a82042577283: Download complete", > "a82042577283: Pull complete", > "Digest: sha256:79a7901cc6403d11b4e7f6978d7e99a1879972ccb61f430f5660695c8683d7a0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 13:11:49,925 DEBUG: 33381 -- NET_HOST enabled", > "2018-06-22 13:11:49,925 DEBUG: 33381 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-haproxy --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp1Fzxo7:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/ipa/ca.crt:/etc/ipa/ca.crt:ro --volume /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro --volume /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro --volume /etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", > "2018-06-22 13:11:51,335 DEBUG: 33380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-rabbitmq ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-rabbitmq", > "e603d701fd04: Pulling fs layer", > "e603d701fd04: Verifying Checksum", > "e603d701fd04: Download complete", > "e603d701fd04: Pull complete", > "Digest: sha256:4e07b8b4fd82b69e2a7ba105447776e730b0dd8fffa70a2f13c5c0e612b1ccdc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 13:11:51,338 DEBUG: 33380 -- NET_HOST enabled", > "2018-06-22 13:11:51,338 DEBUG: 33380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-rabbitmq --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=rabbitmq --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpRTiLKx:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", > "2018-06-22 13:11:51,828 DEBUG: 33379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.07 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}7dbba0ad6f107a5d6775f284addccc35'", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]/ensure: created", > "Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Api/Cinder_config[DEFAULT/nova_catalog_info]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created", > "Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: defined content as '{md5}870efbe437d63cd260287cd36472d7b1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined content as '{md5}083eb77078c11a38e340afdc95d1c1aa'", > "Notice: Applied catalog in 5.01 seconds", > " Total: 134", > " Success: 134", > " Changed: 134", > " Out of sync: 134", > " Skipped: 36", > " Total: 374", > " File line: 0.00", > " File: 0.31", > " Augeas: 0.63", > " Last run: 1529673109", > " Cinder config: 3.39", > " Config retrieval: 4.73", > " Total: 9.13", > "Gathering files modified after 2018-06-22 13:11:34.448631685 +0000", > "2018-06-22 13:11:51,828 DEBUG: 33379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/cinder.origin_of_time", > "+ touch /var/lib/config-data/cinder.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/api.pp\", 203]:[\"/etc/config.pp\", 2]", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_admin_info parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/cinder", > "++ stat -c %y /var/lib/config-data/cinder.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:34.448631685 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/cinder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/cinder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/cinder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/cinder --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/cinder --mtime=1970-01-01", > "2018-06-22 13:11:51,829 INFO: 33379 -- Removing container: docker-puppet-cinder", > "2018-06-22 13:11:51,874 DEBUG: 33379 -- docker-puppet-cinder", > "2018-06-22 13:11:51,874 INFO: 33379 -- Finished processing puppet configs for cinder", > "2018-06-22 13:11:51,874 INFO: 33379 -- Starting configuration of swift using image 
192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:11:51,874 DEBUG: 33379 -- config_volume swift", > "2018-06-22 13:11:51,875 DEBUG: 33379 -- puppet_tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-06-22 13:11:51,875 DEBUG: 33379 -- manifest include ::tripleo::profile::base::swift::proxy", > "include ::tripleo::profile::base::swift::storage", > "2018-06-22 13:11:51,875 DEBUG: 33379 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:11:51,875 DEBUG: 33379 -- volumes []", > "2018-06-22 13:11:51,875 INFO: 33379 -- Removing container: docker-puppet-swift", > "2018-06-22 13:11:51,922 INFO: 33379 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:11:51,925 DEBUG: 33379 -- NET_HOST enabled", > "2018-06-22 13:11:51,925 DEBUG: 33379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmppG_2_I:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro 
--volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", > "2018-06-22 13:11:59,794 DEBUG: 33379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.64 seconds", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.16:11211'", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created", 
> "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created", > "Notice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/rsync]/ensure: defined content as '{md5}9389435d40399d3f3b3a0e9944346f87'", > "Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}9b8125614d1860f206abb9767c7b2557'", > "Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to 'OJ2m4Tm9Ho10GUzJVC46bPi1G'", > "Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to 'auto'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.16:11211'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed 'admin, SwiftOperator' to 'admin, swiftoperator, ResellerAdmin'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700'", > "Notice: 
/Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed '/tmp/keystone-signing-swift' to '/var/cache/swift'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True'", > "Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created", > "Notice: 
/Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{md5}83d99714b5d1e495a61737a51a8170ec'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined content as '{md5}578dba3f3fc75f3e5b6335031df3cec8'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{md5}69f91109e3d7181d7f2d08af24922938'", > "Notice: Applied catalog in 0.49 seconds", > " Total: 97", > " Success: 97", > " Total: 192", > " Skipped: 37", > " Out of sync: 97", > " Changed: 97", > " Swift config: 0.00", > " Swift keymaster config: 0.01", > " Swift object expirer config: 0.01", > " File: 0.04", > " Swift proxy config: 0.19", > " Last run: 1529673118", > " Config retrieval: 2.01", > " Total: 2.27", > " Config: 1529673116", > "Gathering files modified after 2018-06-22 13:11:52.105676048 +0000", > "2018-06-22 13:11:59,795 DEBUG: 33379 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server'", > "+ origin_of_time=/var/lib/config-data/swift.origin_of_time", > "+ touch /var/lib/config-data/swift.origin_of_time", > "+ /usr/bin/puppet apply --summarize 
--detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 147]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 163]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 165]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > "Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56", > "Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56", > "Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56", > "Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56", > "Warning: Unknown variable: 'outgoing_allow_headers_real'. 
at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release", > "Warning: Class 'xinetd' is already defined at /etc/config.pp:6; cannot redefine at /etc/puppet/modules/xinetd/manifests/init.pp:12", > "Warning: Unknown variable: 'xinetd::params::default_user'. at /etc/puppet/modules/xinetd/manifests/service.pp:110:14", > "Warning: Unknown variable: 'xinetd::params::default_group'. at /etc/puppet/modules/xinetd/manifests/service.pp:116:15", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:161:13", > "Warning: Unknown variable: 'xinetd::service_name'. at /etc/puppet/modules/xinetd/manifests/service.pp:166:24", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:167:21", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 183]:", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 197]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift", > "++ stat -c %y /var/lib/config-data/swift.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:52.105676048 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift --mtime=1970-01-01", > "2018-06-22 13:11:59,795 INFO: 33379 -- Removing container: docker-puppet-swift", > "2018-06-22 13:11:59,847 DEBUG: 33379 -- docker-puppet-swift", > "2018-06-22 13:11:59,847 INFO: 33379 -- Finished processing puppet configs for swift", > "2018-06-22 13:11:59,848 INFO: 33379 -- Starting configuration of heat_api_cfn using image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 13:11:59,848 DEBUG: 33379 -- config_volume heat_api_cfn", > "2018-06-22 13:11:59,848 DEBUG: 33379 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-06-22 13:11:59,848 DEBUG: 33379 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-06-22 13:11:59,848 DEBUG: 33379 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 13:11:59,848 DEBUG: 33379 -- volumes []", > "2018-06-22 13:11:59,848 INFO: 33379 -- Removing container: docker-puppet-heat_api_cfn", > "2018-06-22 13:11:59,854 DEBUG: 33381 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 
2.63 seconds", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}3e602920be68dd9114246aadb54dcae7'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo/pacemaker-restarts]/ensure: created", > "Notice: Applied catalog in 0.34 seconds", > " Skipped: 33", > " Total: 79", > " Config retrieval: 2.93", > " Total: 2.96", > " Config: 1529673115", > "Gathering files modified after 2018-06-22 13:11:50.127671228 +0000", > "2018-06-22 13:11:59,854 DEBUG: 33381 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/haproxy", > "++ stat -c %y /var/lib/config-data/haproxy.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:50.127671228 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/haproxy", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/haproxy", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/haproxy.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/haproxy --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/haproxy --mtime=1970-01-01", > "2018-06-22 13:11:59,854 INFO: 33381 -- Removing container: docker-puppet-haproxy", > "2018-06-22 13:11:59,896 DEBUG: 33381 -- docker-puppet-haproxy", > "2018-06-22 13:11:59,896 INFO: 33381 -- Finished processing puppet configs for haproxy", > "2018-06-22 13:11:59,896 INFO: 33381 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:11:59,896 DEBUG: 33381 -- config_volume ceilometer", > "2018-06-22 13:11:59,897 DEBUG: 33381 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config", > "2018-06-22 13:11:59,897 DEBUG: 33381 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-06-22 13:11:59,897 DEBUG: 33381 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:11:59,897 DEBUG: 33381 -- volumes []", > "2018-06-22 13:11:59,897 
INFO: 33381 -- Removing container: docker-puppet-ceilometer", > "2018-06-22 13:11:59,913 INFO: 33379 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 13:11:59,963 INFO: 33381 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:12:00,551 DEBUG: 33379 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn", > "15497368e843: Already exists", > "4089b2a1d02c: Pulling fs layer", > "4089b2a1d02c: Verifying Checksum", > "4089b2a1d02c: Download complete", > "4089b2a1d02c: Pull complete", > "Digest: sha256:bbcf3cc8eeb6d8910642b40cfa9fe544a33bee49cfb4512abe49c5bf176ed8f0", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 13:12:00,555 DEBUG: 33379 -- NET_HOST enabled", > "2018-06-22 13:12:00,555 DEBUG: 33379 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api_cfn --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpniErGF:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint 
/var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", > "2018-06-22 13:12:02,335 DEBUG: 33381 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "333aa6b2b383: Pulling fs layer", > "1eb9ef5adcb4: Pulling fs layer", > "333aa6b2b383: Verifying Checksum", > "1eb9ef5adcb4: Verifying Checksum", > "1eb9ef5adcb4: Download complete", > "333aa6b2b383: Pull complete", > "1eb9ef5adcb4: Pull complete", > "Digest: sha256:3f638e03aaf1d7e303183e06ff1627a5a0efeaef228a7be1e9667ae62d7d6a1b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:12:02,338 DEBUG: 33381 -- NET_HOST enabled", > "2018-06-22 13:12:02,338 DEBUG: 33381 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpwRzJrx:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", > "2018-06-22 13:12:02,699 DEBUG: 33380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.85 seconds", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}b126e4b8423a26246952d34c225c6fdd'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}1e1a80b34927c980a0411cf7e41d2054'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > " Total: 12", > " Success: 12", > " Total: 20", > " Out of sync: 9", > " Changed: 9", > " Config retrieval: 1.01", > " Total: 1.05", > " Last run: 
1529673121", > " Config: 1529673120", > "Gathering files modified after 2018-06-22 13:11:51.542674679 +0000", > "2018-06-22 13:12:02,699 DEBUG: 33380 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/rabbitmq.origin_of_time", > "+ touch /var/lib/config-data/rabbitmq.origin_of_time", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/rabbitmq", > "++ stat -c %y /var/lib/config-data/rabbitmq.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:11:51.542674679 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/rabbitmq", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/rabbitmq", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/rabbitmq.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/rabbitmq --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/rabbitmq --mtime=1970-01-01", > "2018-06-22 13:12:02,699 INFO: 33380 -- Removing container: docker-puppet-rabbitmq", > "2018-06-22 13:12:02,766 DEBUG: 33380 -- docker-puppet-rabbitmq", > "2018-06-22 13:12:02,767 INFO: 33380 -- Finished processing puppet configs for rabbitmq", > "2018-06-22 13:12:02,767 INFO: 33380 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:12:02,767 DEBUG: 33380 -- config_volume neutron", > "2018-06-22 13:12:02,767 DEBUG: 33380 -- puppet_tags 
file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-06-22 13:12:02,767 DEBUG: 33380 -- manifest include tripleo::profile::base::neutron::server", > "include ::tripleo::profile::base::neutron::plugins::ml2", > "include tripleo::profile::base::neutron::dhcp", > "include tripleo::profile::base::neutron::l3", > "include tripleo::profile::base::neutron::metadata", > "include ::tripleo::profile::base::neutron::ovs", > "2018-06-22 13:12:02,767 DEBUG: 33380 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:12:02,767 DEBUG: 33380 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-06-22 13:12:02,767 INFO: 33380 -- Removing container: docker-puppet-neutron", > "2018-06-22 13:12:02,835 INFO: 33380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:12:06,985 DEBUG: 33380 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "ea1d509b6f44: Pulling fs layer", > "e9f9993bb931: Pulling fs layer", > "e9f9993bb931: Verifying Checksum", > "e9f9993bb931: Download complete", > "ea1d509b6f44: Verifying Checksum", > "ea1d509b6f44: Download complete", > "ea1d509b6f44: Pull complete", > "e9f9993bb931: Pull complete", > "Digest: sha256:af12594500608f07f8d38590e2c9b2983e5d81ae8b63aec042f36411b0e76adc", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:12:06,988 DEBUG: 33380 -- NET_HOST enabled", > "2018-06-22 13:12:06,988 DEBUG: 33380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp9owRGK:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume 
/etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", > "2018-06-22 13:12:09,874 DEBUG: 33381 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.29 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[database/metering_time_to_live]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/filter_project]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/archive_policy]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Dispatcher::Gnocchi/Ceilometer_config[dispatcher_gnocchi/resources_definition_file]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", 
> "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/File[event_pipeline]/ensure: defined content as '{md5}dafea5c96d5da5251f9b8a275c6d71aa'", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.58 seconds", > " Total: 31", > " Success: 31", > " Total: 158", > " Out of sync: 31", > " Changed: 31", > " Skipped: 35", > " Ceilometer config: 0.48", > " Config retrieval: 1.52", > " Last run: 1529673128", > " Total: 2.00", > " Config: 1529673126", > "Gathering files modified after 2018-06-22 13:12:02.523700823 +0000", > "2018-06-22 13:12:09,875 DEBUG: 33381 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config /etc/config.pp", > "Warning: ModuleLoader: module 'ceilometer' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ceilometer/manifests/agent/notification.pp\", 118]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer/agent/notification.pp\", 34]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:12:02.523700823 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-06-22 13:12:09,875 INFO: 33381 -- Removing container: docker-puppet-ceilometer", > "2018-06-22 13:12:09,908 DEBUG: 33381 -- docker-puppet-ceilometer", > "2018-06-22 13:12:09,908 INFO: 33381 -- Finished processing puppet configs for ceilometer", > "2018-06-22 13:12:12,840 DEBUG: 33379 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.58 seconds", > "Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}6bfb91ec3128b1252913d8ba04a9c38f'", > "Notice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: 
created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{md5}dec9ed78f8f4a5b645106fa3b8a3a776'", > "Notice: Applied catalog in 2.25 seconds", > " Total: 337", > " File: 0.20", > " Heat config: 1.38", > " Last run: 1529673131", > " Config retrieval: 4.08", > " Total: 5.71", > " Config: 1529673125", > "Gathering files modified after 2018-06-22 13:12:00.754696687 +0000", > "2018-06-22 13:12:12,840 DEBUG: 33379 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat_api_cfn.origin_of_time", > "+ touch /var/lib/config-data/heat_api_cfn.origin_of_time", > " with Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp\", 125]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api_cfn", > "++ stat -c %y /var/lib/config-data/heat_api_cfn.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:12:00.754696687 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api_cfn", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api_cfn", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api_cfn.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api_cfn --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api_cfn --mtime=1970-01-01", > "2018-06-22 13:12:12,840 INFO: 33379 -- Removing container: docker-puppet-heat_api_cfn", > "2018-06-22 13:12:12,895 DEBUG: 33379 -- docker-puppet-heat_api_cfn", > "2018-06-22 13:12:12,896 INFO: 33379 -- Finished processing puppet configs for heat_api_cfn", > "2018-06-22 13:12:18,108 DEBUG: 33380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.98 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: 
created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: 
created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: 
/Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_password]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_userid]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/rabbit_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 1.46 seconds", > " Total: 107", > " Success: 107", > " Changed: 107", > " Out of sync: 107", > " Total: 359", > " Skipped: 44", > " Neutron api config: 0.00", > " Neutron agent ovs: 0.01", > " Neutron l3 agent config: 0.01", > " Neutron metadata agent config: 0.02", > " Neutron plugin ml2: 0.02", > " Neutron dhcp agent config: 0.08", > " Neutron config: 1.05", > " Last run: 1529673136", > " Config retrieval: 3.33", > " Total: 4.58", > " Config: 1529673132", > "Gathering files modified after 2018-06-22 13:12:07.165711544 +0000", > "2018-06-22 13:12:18,109 DEBUG: 33380 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "Warning: Scope(Class[Neutron]): neutron::rabbit_host, 
neutron::rabbit_hosts, neutron::rabbit_password, neutron::rabbit_port, neutron::rabbit_user, neutron::rabbit_virtual_host and neutron::rpc_backend are deprecated. Please use neutron::default_transport_url instead.", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 530]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/server.pp\", 104]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 132]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/db.pp\", 69]:[\"/etc/puppet/modules/neutron/manifests/server.pp\", 315]", > "Warning: Scope(Class[Neutron::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: '::neutron::params::metadata_agent_package'. at /etc/puppet/modules/neutron/manifests/agents/metadata.pp:122:6", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:12:07.165711544 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-06-22 13:12:18,109 INFO: 33380 -- Removing container: docker-puppet-neutron", > "2018-06-22 13:12:18,150 DEBUG: 33380 -- docker-puppet-neutron", > "2018-06-22 13:12:18,150 INFO: 33380 -- Finished processing puppet configs for neutron", > "2018-06-22 13:12:18,150 INFO: 33380 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 13:12:18,150 DEBUG: 33380 -- config_volume horizon", > "2018-06-22 13:12:18,150 DEBUG: 33380 -- puppet_tags file,file_line,concat,augeas,cron,horizon_config", > "2018-06-22 13:12:18,150 DEBUG: 33380 -- manifest include ::tripleo::profile::base::horizon", > "2018-06-22 13:12:18,151 DEBUG: 33380 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 13:12:18,151 DEBUG: 33380 -- volumes []", > "2018-06-22 13:12:18,151 INFO: 33380 -- Removing container: docker-puppet-horizon", > "2018-06-22 13:12:18,206 INFO: 33380 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 13:12:23,128 DEBUG: 33380 -- Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-horizon ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-horizon", > "76e0e41ffb2e: Pulling fs layer", > "76e0e41ffb2e: Verifying Checksum", > "76e0e41ffb2e: Download complete", > "76e0e41ffb2e: Pull complete", > "Digest: sha256:985bc1250661a931ac3368fe39a6651116c123db6c18789bfdb7da2c61741b0d", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 13:12:23,131 DEBUG: 33380 -- NET_HOST enabled", > "2018-06-22 13:12:23,131 DEBUG: 33380 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-horizon --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpZ2ehsw:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", > "2018-06-22 13:12:32,001 DEBUG: 33380 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.00 seconds", > "Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: 
defined content as '{md5}5e70f28d6cca0d978242202de6e8e0e3'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}05a4d6cbec792391f771b5d1a68687d9'", > "Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{md5}118eb7518a1d018a162d23dfe32c4bad'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}601e633104479c5b9ee828b4bae911ac' to '{md5}4fe0349dab6bd1d72bdf0b99a86ce08e'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/owner: owner changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{md5}bc5cb3b80367d89e79e323750fcbb4f0'", > "Notice: Applied catalog in 0.63 seconds", > " Total: 86", > " Success: 86", > " Total: 172", > " Out of sync: 84", > " Changed: 84", > " File: 0.19", > " Last run: 1529673151", > " Total: 2.53", > " Config: 1529673148", > "Gathering files modified after 2018-06-22 13:12:23.305747341 +0000", > "2018-06-22 13:12:32,001 DEBUG: 33380 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,horizon_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,horizon_config'", > "+ origin_of_time=/var/lib/config-data/horizon.origin_of_time", > "+ touch /var/lib/config-data/horizon.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,horizon_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp\", 97]:[\"/etc/config.pp\", 2]", > "Warning: ModuleLoader: module 'horizon' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Undefined variable ''; ", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 559]:[\"/etc/config.pp\", 2]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 560]:[\"/etc/config.pp\", 2]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 562]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/horizon", > "++ stat -c %y /var/lib/config-data/horizon.origin_of_time", > "+ echo 'Gathering files modified after 2018-06-22 13:12:23.305747341 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/horizon", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/horizon", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/horizon.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/horizon --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/horizon --mtime=1970-01-01", > "2018-06-22 13:12:32,001 INFO: 33380 -- Removing container: docker-puppet-horizon", > "2018-06-22 13:12:32,048 DEBUG: 33380 -- docker-puppet-horizon", > "2018-06-22 13:12:32,048 INFO: 33380 -- Finished processing puppet configs for horizon", > "2018-06-22 13:12:32,049 DEBUG: 33378 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-06-22 13:12:32,049 DEBUG: 33378 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-06-22 13:12:32,052 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-06-22 13:12:32,052 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-22 13:12:32,052 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-22 13:12:32,052 DEBUG: 33378 -- Updating config hash for mysql_bootstrap, config_volume=heat_api_cfn hash=3d0d90fbc91e503875356f69c121b5d6", > "2018-06-22 
13:12:32,052 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-22 13:12:32,052 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-22 13:12:32,053 DEBUG: 33378 -- Updating config hash for rabbitmq_bootstrap, config_volume=heat_api_cfn hash=4cfc58610a6ee8abac132483d008d519", > "2018-06-22 13:12:32,053 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-06-22 13:12:32,055 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-06-22 13:12:32,055 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-06-22 13:12:32,055 DEBUG: 33378 -- Updating config hash for nova_placement, config_volume=heat_api_cfn hash=cb9132c83fe00c38e2a3e1886a257011", > "2018-06-22 13:12:32,055 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-22 13:12:32,055 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-22 13:12:32,055 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/heat/etc/heat.md5sum for config_volume /var/lib/config-data/heat/etc/heat", > "2018-06-22 13:12:32,055 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/heat/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/heat/etc/my.cnf.d", > "2018-06-22 13:12:32,056 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data.md5sum for config_volume 
/var/lib/config-data", > "2018-06-22 13:12:32,056 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/swift/etc", > "2018-06-22 13:12:32,056 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-22 13:12:32,056 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-22 13:12:32,056 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-22 13:12:32,056 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-22 13:12:32,056 DEBUG: 33378 -- Updating config hash for keystone_cron, config_volume=heat_api_cfn hash=7135538464b9fb81b82cfbf7b53e06e1", > "2018-06-22 13:12:32,056 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/panko/etc.md5sum for config_volume /var/lib/config-data/panko/etc", > "2018-06-22 13:12:32,056 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/panko/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/panko/etc/my.cnf.d", > "2018-06-22 13:12:32,057 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-22 13:12:32,057 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-22 13:12:32,057 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-06-22 13:12:32,057 DEBUG: 33378 -- Updating config hash for keystone_db_sync, 
config_volume=heat_api_cfn hash=7135538464b9fb81b82cfbf7b53e06e1", > "2018-06-22 13:12:32,057 DEBUG: 33378 -- Updating config hash for keystone, config_volume=heat_api_cfn hash=7135538464b9fb81b82cfbf7b53e06e1", > "2018-06-22 13:12:32,057 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/aodh/etc/aodh.md5sum for config_volume /var/lib/config-data/aodh/etc/aodh", > "2018-06-22 13:12:32,057 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/aodh/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/aodh/etc/my.cnf.d", > "2018-06-22 13:12:32,057 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,057 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Updating config hash for neutron_ovs_bridge, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Got hashfile 
/var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Updating config hash for glance_api_db_sync, config_volume=heat_api_cfn hash=ce635a7b60e8e89d9f8a6130e0a31be1", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/neutron/etc.md5sum for config_volume /var/lib/config-data/neutron/etc", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/neutron/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/neutron/etc/my.cnf.d", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/neutron/usr/share.md5sum for config_volume /var/lib/config-data/neutron/usr/share", > "2018-06-22 13:12:32,058 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/sahara/etc/sahara.md5sum for config_volume /var/lib/config-data/sahara/etc/sahara", > "2018-06-22 13:12:32,059 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-06-22 13:12:32,059 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-06-22 13:12:32,059 DEBUG: 33378 -- Updating config hash for horizon, config_volume=heat_api_cfn hash=01eaa54e33f1ab9626f72cb20288172d", > "2018-06-22 13:12:32,060 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Updating config hash for clustercheck, config_volume=heat_api_cfn hash=75dd38d4613c9ab710ec801025de1f50", > "2018-06-22 
13:12:32,061 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Updating config hash for mysql_restart_bundle, config_volume=heat_api_cfn hash=3d0d90fbc91e503875356f69c121b5d6", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Updating config hash for haproxy_restart_bundle, config_volume=heat_api_cfn hash=819c2c449f0801f24d554f23abe33b2b", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-06-22 13:12:32,061 DEBUG: 33378 -- Updating config hash for rabbitmq_restart_bundle, config_volume=heat_api_cfn hash=4cfc58610a6ee8abac132483d008d519", > "2018-06-22 13:12:32,062 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon/etc", > "2018-06-22 13:12:32,062 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-06-22 13:12:32,062 DEBUG: 33378 -- Got hashfile 
/var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-06-22 13:12:32,062 DEBUG: 33378 -- Updating config hash for redis_restart_bundle, config_volume=heat_api_cfn hash=0b60eeb5d101188bb85471a93263935c", > "2018-06-22 13:12:32,064 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 13:12:32,064 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 13:12:32,064 DEBUG: 33378 -- Updating config hash for cinder_volume_restart_bundle, config_volume=heat_api_cfn hash=484345728ba647c8391695dfa3790b3d", > "2018-06-22 13:12:32,064 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 13:12:32,064 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 13:12:32,064 DEBUG: 33378 -- Updating config hash for gnocchi_statsd, config_volume=heat_api_cfn hash=d5d5bb348d5143d33909ba017cca92ca", > "2018-06-22 13:12:32,064 DEBUG: 33378 -- Updating config hash for cinder_backup_restart_bundle, config_volume=heat_api_cfn hash=484345728ba647c8391695dfa3790b3d", > "2018-06-22 13:12:32,065 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 13:12:32,065 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 13:12:32,065 DEBUG: 33378 -- Updating config hash for gnocchi_metricd, config_volume=heat_api_cfn hash=d5d5bb348d5143d33909ba017cca92ca", > "2018-06-22 
13:12:32,065 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-06-22 13:12:32,065 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-06-22 13:12:32,065 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/ceilometer/etc/ceilometer.md5sum for config_volume /var/lib/config-data/ceilometer/etc/ceilometer", > "2018-06-22 13:12:32,065 DEBUG: 33378 -- Updating config hash for gnocchi_api, config_volume=heat_api_cfn hash=d5d5bb348d5143d33909ba017cca92ca", > "2018-06-22 13:12:32,067 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,067 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,067 DEBUG: 33378 -- Updating config hash for swift_container_updater, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Updating config hash for aodh_evaluator, config_volume=heat_api_cfn hash=50b2e72486b0ea957bb6c2b4de67a283", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume 
/var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Updating config hash for nova_scheduler, config_volume=heat_api_cfn hash=9fc8f91a60564752f1bf79e5222f63f1", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Updating config hash for swift_object_server, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 13:12:32,068 DEBUG: 33378 -- Updating config hash for cinder_api, config_volume=heat_api_cfn hash=484345728ba647c8391695dfa3790b3d", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Updating config hash for swift_proxy, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for 
config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Updating config hash for neutron_dhcp, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Updating config hash for heat_api, config_volume=heat_api_cfn hash=b7558db0b507d4b29c09e8d998f4021a", > "2018-06-22 13:12:32,069 DEBUG: 33378 -- Updating config hash for swift_object_auditor, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Updating config hash for neutron_metadata_agent, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Updating config hash for ceilometer_agent_central, config_volume=heat_api_cfn hash=e84a4388c67bb2db7836ae48b22ed7e8", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Looking for hashfile 
/var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Updating config hash for swift_account_replicator, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Updating config hash for aodh_notifier, config_volume=heat_api_cfn hash=50b2e72486b0ea957bb6c2b4de67a283", > "2018-06-22 13:12:32,070 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Updating config hash for nova_api_cron, config_volume=heat_api_cfn hash=9fc8f91a60564752f1bf79e5222f63f1", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Updating config hash for nova_consoleauth, config_volume=heat_api_cfn hash=9fc8f91a60564752f1bf79e5222f63f1", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Got 
hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Updating config hash for gnocchi_db_sync, config_volume=heat_api_cfn hash=d5d5bb348d5143d33909ba017cca92ca", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Updating config hash for swift_account_reaper, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,071 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Updating config hash for ceilometer_agent_notification, config_volume=heat_api_cfn hash=e84a4388c67bb2db7836ae48b22ed7e8-6ca97cbcc5c8cb7ad7eb96ca2954a742", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for 
config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Updating config hash for nova_vnc_proxy, config_volume=heat_api_cfn hash=9fc8f91a60564752f1bf79e5222f63f1", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,072 DEBUG: 33378 -- Updating config hash for swift_rsync, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Updating config hash for nova_api, config_volume=heat_api_cfn hash=9fc8f91a60564752f1bf79e5222f63f1", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Updating config hash for aodh_api, config_volume=heat_api_cfn hash=50b2e72486b0ea957bb6c2b4de67a283", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Updating config hash for nova_metadata, config_volume=heat_api_cfn hash=9fc8f91a60564752f1bf79e5222f63f1", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume 
/var/lib/config-data/puppet-generated/heat", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Updating config hash for heat_engine, config_volume=heat_api_cfn hash=f0a0e4ccd7ae8ba492f9ca31c172ea39", > "2018-06-22 13:12:32,073 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Updating config hash for swift_container_server, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Updating config hash for swift_object_replicator, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Updating config hash for neutron_l3_agent, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Got hashfile 
/var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Updating config hash for cinder_scheduler, config_volume=heat_api_cfn hash=484345728ba647c8391695dfa3790b3d", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,074 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Updating config hash for nova_conductor, config_volume=heat_api_cfn hash=9fc8f91a60564752f1bf79e5222f63f1", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Updating config hash for heat_api_cfn, config_volume=heat_api_cfn hash=5b498dbdcb57c061f014b4a5ee807911", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Updating config hash for sahara_api, config_volume=heat_api_cfn hash=b6f5b6cd3b26a22dbc1456b85ee3cf24", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Updating config hash for sahara_engine, config_volume=heat_api_cfn hash=b6f5b6cd3b26a22dbc1456b85ee3cf24", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Looking for 
hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,075 DEBUG: 33378 -- Updating config hash for neutron_ovs_agent, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-22 13:12:32,076 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 13:12:32,076 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-06-22 13:12:32,076 DEBUG: 33378 -- Updating config hash for cinder_api_cron, config_volume=heat_api_cfn hash=484345728ba647c8391695dfa3790b3d", > "2018-06-22 13:12:32,076 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,076 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,076 DEBUG: 33378 -- Updating config hash for swift_account_auditor, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,076 DEBUG: 33378 -- Updating config hash for swift_container_replicator, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,076 DEBUG: 33378 -- Updating config hash for swift_object_updater, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,076 DEBUG: 33378 -- Updating config hash for swift_object_expirer, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Looking 
for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Updating config hash for heat_api_cron, config_volume=heat_api_cfn hash=b7558db0b507d4b29c09e8d998f4021a", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Updating config hash for swift_container_auditor, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Updating config hash for panko_api, config_volume=heat_api_cfn hash=6ca97cbcc5c8cb7ad7eb96ca2954a742", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Updating config hash for aodh_listener, config_volume=heat_api_cfn hash=50b2e72486b0ea957bb6c2b4de67a283", > "2018-06-22 13:12:32,077 DEBUG: 
33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-06-22 13:12:32,077 DEBUG: 33378 -- Updating config hash for neutron_api, config_volume=heat_api_cfn hash=1458ccfb2d6aca5d6f994c0721e6e0a6", > "2018-06-22 13:12:32,078 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,078 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-06-22 13:12:32,078 DEBUG: 33378 -- Updating config hash for swift_account_server, config_volume=heat_api_cfn hash=63d45f8cab783073348e122d879c162d", > "2018-06-22 13:12:32,078 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-22 13:12:32,078 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-06-22 13:12:32,078 DEBUG: 33378 -- Updating config hash for glance_api, config_volume=heat_api_cfn hash=ce635a7b60e8e89d9f8a6130e0a31be1", > "2018-06-22 13:12:32,078 DEBUG: 33378 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 13:12:32,078 DEBUG: 33378 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-06-22 13:12:32,078 DEBUG: 33378 -- Updating config hash for logrotate_crond, config_volume=heat_api_cfn hash=a6aaaf5320a3a22a11e8dfe3cfa9d954" > 
] >} >2018-06-22 09:12:33,379 p=21516 u=mistral | TASK [Start containers for step 1] ********************************************* >2018-06-22 09:12:34,130 p=21516 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:12:34,146 p=21516 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:13:02,497 p=21516 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:13:02,522 p=21516 u=mistral | TASK [Debug output for task which failed: Start containers for step 1] ********* >2018-06-22 09:13:02,582 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-backup ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-backup", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "5e7b63a88a76: Already exists", > "89c035649aaf: Pulling fs layer", > "89c035649aaf: Verifying Checksum", > "89c035649aaf: Download complete", > "89c035649aaf: Pull complete", > "Digest: sha256:bbd94b3a8477e286264ef2b5660a8c60d872d945e37c6023ae19c6dd09ea156f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-volume ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-volume", > "606ec38d3d26: Pulling fs layer", > "606ec38d3d26: Verifying Checksum", > "606ec38d3d26: Download complete", > "606ec38d3d26: Pull complete", > "Digest: sha256:d4d518ef6aad7c077ff97a0ad1de70ef4074ace3ddde85fdfb70e12e63891ea5", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", > "stdout: ", > "stdout: d207eee431ddd2049f7693b755197f1de5fa11507b824f6a8072a79bd7c0b567", > "stdout: Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...", > "OK", > "Filling help tables...", > "Creating OpenGIS required SP-s...", > "To start mysqld at boot time you have to copy", > "support-files/mysql.server to the right place for your system", > "PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !", > "To do so, start the server, then issue the following commands:", > "'/usr/bin/mysqladmin' -u root password 'new-password'", > "'/usr/bin/mysqladmin' -u root -h controller-0 password 'new-password'", > "Alternatively you can run:", > "'/usr/bin/mysql_secure_installation'", > "which will also give you the option of removing the test", > "databases and anonymous user created by default. 
This is", > "strongly recommended for production servers.", > "See the MariaDB Knowledgebase at http://mariadb.com/kb or the", > "MySQL manual for more instructions.", > "You can start the MariaDB daemon with:", > "cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'", > "You can test the MariaDB daemon with mysql-test-run.pl", > "cd '/usr/mysql-test' ; perl mysql-test-run.pl", > "Please report any problems at http://mariadb.org/jira", > "The latest information about MariaDB is available at http://mariadb.org/.", > "You can find additional information about the MySQL part at:", > "http://dev.mysql.com", > "Consider joining MariaDB's strong and vibrant community:", > "https://mariadb.org/get-involved/", > "180622 13:12:53 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180622 13:12:53 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "spawn mysql_secure_installation", > "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB", > " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", > "In order to log into MariaDB to secure it, we'll need the current", > "password for the root user. If you've just installed MariaDB, and", > "you haven't set the root password yet, the password will be blank,", > "so you should just press enter here.", > "Enter current password for root (enter for none): ", > "OK, successfully used password, moving on...", > "Setting the root password ensures that nobody can log into the MariaDB", > "root user without the proper authorisation.", > "Set root password? [Y/n] y", > "New password: ", > "Re-enter new password: ", > "Password updated successfully!", > "Reloading privilege tables..", > " ... Success!", > "By default, a MariaDB installation has an anonymous user, allowing anyone", > "to log into MariaDB without having to have a user account created for", > "them. This is intended only for testing, and to make the installation", > "go a bit smoother. 
You should remove them before moving into a", > "production environment.", > "Remove anonymous users? [Y/n] y", > "Normally, root should only be allowed to connect from 'localhost'. This", > "ensures that someone cannot guess at the root password from the network.", > "Disallow root login remotely? [Y/n] n", > " ... skipping.", > "By default, MariaDB comes with a database named 'test' that anyone can", > "access. This is also intended only for testing, and should be removed", > "before moving into a production environment.", > "Remove test database and access to it? [Y/n] y", > " - Dropping test database...", > " - Removing privileges on test database...", > "Reloading the privilege tables will ensure that all changes made so far", > "will take effect immediately.", > "Reload privilege tables now? [Y/n] y", > "Cleaning up...", > "All done! If you've completed all of the above steps, your MariaDB", > "installation should now be secure.", > "Thanks for using MariaDB!", > "180622 13:12:56 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "180622 13:12:57 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180622 13:12:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "mysqld is alive", > "180622 13:13:00 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "stderr: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Copying /dev/null to /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Setting permission for /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Deleting /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/galera.cnf to /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sysconfig/clustercheck to 
/etc/sysconfig/clustercheck", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/root/.my.cnf to /root/.my.cnf", > "INFO:__main__:Writing out command to execute", > "2018-06-22 13:12:40 140456647645376 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-22 13:12:40 140456647645376 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 42 ...", > "2018-06-22 13:12:45 139819520870592 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-22 13:12:45 139819520870592 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 71 ...", > "2018-06-22 13:12:49 140347896002752 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-06-22 13:12:49 140347896002752 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 101 ...", > "/usr/bin/mysqld_safe: line 755: ulimit: -1: invalid option", > "ulimit: usage: ulimit [-SHacdefilmnpqrstuvx] [limit]", > "stdout: 9b5cf75acf0e2941631f1c38d9a47517d340ecfb9117b94a451e7a9716ad1b67" > ] >} >2018-06-22 09:13:02,603 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-22 09:13:02,619 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-22 09:13:02,641 p=21516 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks1.json exists] ******** >2018-06-22 09:13:03,032 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:13:03,043 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:13:03,056 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:13:03,080 
p=21516 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 1] ******************** >2018-06-22 09:13:03,106 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:13:03,128 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:13:03,138 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:13:03,159 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 1] *** >2018-06-22 09:13:03,185 p=21516 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,207 p=21516 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,224 p=21516 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,229 p=21516 u=mistral | PLAY [External deployment step 2] ********************************************** >2018-06-22 09:13:03,250 p=21516 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-22 09:13:03,267 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,284 p=21516 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-22 09:13:03,307 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars", "skip_reason": 
"Conditional result was False"} >2018-06-22 09:13:03,311 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,314 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,332 p=21516 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-22 09:13:03,349 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,365 p=21516 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-22 09:13:03,383 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,399 p=21516 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-22 09:13:03,414 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,430 p=21516 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-06-22 09:13:03,445 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,462 p=21516 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-22 09:13:03,476 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,493 
p=21516 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-22 09:13:03,508 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:13:03,525 p=21516 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-22 09:13:03,552 p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbook_verbosity": 2}, "changed": false} >2018-06-22 09:13:03,568 p=21516 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-22 09:13:03,596 p=21516 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_command": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_RETRY_FILES_ENABLED=False ANSIBLE_SSH_RETRIES=3 ANSIBLE_HOST_KEY_CHECKING=False DEFAULT_FORKS=25 ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ansible-playbook --private-key /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/extra_vars.yml"}, "changed": false} >2018-06-22 09:13:03,612 p=21516 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-22 09:17:42,272 p=21516 u=mistral | changed: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": true, "cmd": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ 
ANSIBLE_LOG_PATH=\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_RETRY_FILES_ENABLED=False ANSIBLE_SSH_RETRIES=3 ANSIBLE_HOST_KEY_CHECKING=False DEFAULT_FORKS=25 ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ansible-playbook --private-key /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/extra_vars.yml /usr/share/ceph-ansible/site-docker.yml.sample", "delta": "0:04:38.278204", "end": "2018-06-22 09:17:42.053157", "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "rc": 0, "start": "2018-06-22 09:13:03.774953", "stderr": "[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use \n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \nThis feature will be removed in a future release. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is \ndiscouraged. The module documentation details page may explain more about this \nrationale.. This feature will be removed in a future release. 
Deprecation \nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Could not match supplied host pattern, ignoring: agents\n [WARNING]: Could not match supplied host pattern, ignoring: mdss\n [WARNING]: Could not match supplied host pattern, ignoring: rgws\n [WARNING]: Could not match supplied host pattern, ignoring: nfss\n [WARNING]: Could not match supplied host pattern, ignoring: restapis\n [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors\n [WARNING]: Could not match supplied host pattern, ignoring: iscsigws\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.", "stderr_lines": ["[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use ", "'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. ", "This feature will be removed in a future release. Deprecation warnings can be ", "disabled by setting deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is ", "discouraged. The module documentation details page may explain more about this ", "rationale.. This feature will be removed in a future release. 
Deprecation ", "warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.", " [WARNING]: Could not match supplied host pattern, ignoring: agents", " [WARNING]: Could not match supplied host pattern, ignoring: mdss", " [WARNING]: Could not match supplied host pattern, ignoring: rgws", " [WARNING]: Could not match supplied host pattern, ignoring: nfss", " [WARNING]: Could not match supplied host pattern, ignoring: restapis", " [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors", " [WARNING]: Could not match supplied host pattern, ignoring: iscsigws", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. 
Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. 
This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg."], "stdout": "ansible-playbook 2.5.4\n config file = /usr/share/ceph-ansible/ansible.cfg\n configured module search path = [u'/usr/share/ceph-ansible/library']\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\n executable location = /usr/bin/ansible-playbook\n python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]\nUsing /usr/share/ceph-ansible/ansible.cfg as config file\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml\n\nPLAYBOOK: site-docker.yml.sample ***********************************************\n12 plays in 
/usr/share/ceph-ansible/site-docker.yml.sample\n\nPLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***\n\nTASK [gather facts] ************************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:24\nFriday 22 June 2018 09:13:06 -0400 (0:00:00.191) 0:00:00.191 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [gather and delegate facts] ***********************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:29\nFriday 22 June 2018 09:13:06 -0400 (0:00:00.075) 0:00:00.267 *********** \nok: [controller-0 -> 192.168.24.15] => (item=compute-0)\nok: [controller-0 -> 192.168.24.8] => (item=controller-0)\nok: [controller-0 -> 192.168.24.10] => (item=ceph-0)\n\nTASK [check if it is atomic host] **********************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:38\nFriday 22 June 2018 09:13:15 -0400 (0:00:08.624) 0:00:08.892 *********** \nok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\nok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\nok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [set_fact is_atomic] ******************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:45\nFriday 22 June 2018 09:13:15 -0400 (0:00:00.745) 0:00:09.638 *********** \nok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nTASK 
[pull rhceph image] *******************************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:66
Friday 22 June 2018 09:13:16 -0400 (0:00:00.237) 0:00:09.876 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [set ceph monitor install 'In Progress'] **********************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:76
Friday 22 June 2018 09:13:16 -0400 (0:00:00.103) 0:00:09.979 *********** 
ok: [controller-0] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_ceph_mon": {"start": "20180622091316Z", "status": "In Progress"}}, "per_host": false}, "changed": false}
META: ran handlers
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [ceph-defaults : check for a mon container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2
Friday 22 June 2018 09:13:16 -0400 (0:00:00.243) 0:00:10.223 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.030480", "end": "2018-06-22 13:13:17.212568", "failed_when_result": false, "rc": 0, "start": "2018-06-22 13:13:17.182088", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for an osd container] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11
Friday 22 June 2018 09:13:17 -0400 (0:00:00.753) 0:00:10.976 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mds container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20
Friday 22 June 2018 09:13:17 -0400 (0:00:00.050) 0:00:11.027 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a rgw container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29
Friday 22 June 2018 09:13:17 -0400 (0:00:00.046) 0:00:11.074 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mgr container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38
Friday 22 June 2018 09:13:17 -0400 (0:00:00.046) 0:00:11.120 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mgr-controller-0"], "delta": "0:00:00.030906", "end": "2018-06-22 13:13:17.928958", "failed_when_result": false, "rc": 0, "start": "2018-06-22 13:13:17.898052", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for a rbd mirror container] ************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47
Friday 22 June 2018 09:13:17 -0400 (0:00:00.574) 0:00:11.695 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a nfs container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56
Friday 22 June 2018 09:13:17 -0400 (0:00:00.049) 0:00:11.744 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mon socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2
Friday 22 June 2018 09:13:18 -0400 (0:00:00.046) 0:00:11.791 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11
Friday 22 June 2018 09:13:18 -0400 (0:00:00.046) 0:00:11.838 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21
Friday 22 June 2018 09:13:18 -0400 (0:00:00.048) 0:00:11.887 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph osd socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30
Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:11.934 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40
Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:11.981 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50
Friday 22 June 2018 09:13:18 -0400 (0:00:00.044) 0:00:12.026 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mds socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59
Friday 22 June 2018 09:13:18 -0400 (0:00:00.045) 0:00:12.072 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69
Friday 22 June 2018 09:13:18 -0400 (0:00:00.045) 0:00:12.117 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79
Friday 22 June 2018 09:13:18 -0400 (0:00:00.052) 0:00:12.170 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rgw socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88
Friday 22 June 2018 09:13:18 -0400 (0:00:00.045) 0:00:12.216 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98
Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.263 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108
Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.310 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mgr socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117
Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.357 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127
Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.405 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137
Friday 22 June 2018 09:13:18 -0400 (0:00:00.048) 0:00:12.453 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146
Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.501 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156
Friday 22 June 2018 09:13:18 -0400 (0:00:00.046) 0:00:12.547 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166
Friday 22 June 2018 09:13:18 -0400 (0:00:00.044) 0:00:12.592 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175
Friday 22 June 2018 09:13:18 -0400 (0:00:00.045) 0:00:12.637 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184
Friday 22 June 2018 09:13:18 -0400 (0:00:00.053) 0:00:12.690 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194
Friday 22 June 2018 09:13:18 -0400 (0:00:00.050) 0:00:12.741 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if it is atomic host] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
Friday 22 June 2018 09:13:19 -0400 (0:00:00.046) 0:00:12.787 *********** 
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact is_atomic] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
Friday 22 June 2018 09:13:19 -0400 (0:00:00.532) 0:00:13.320 *********** 
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Friday 22 June 2018 09:13:19 -0400 (0:00:00.079) 0:00:13.399 *********** 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Friday 22 June 2018 09:13:19 -0400 (0:00:00.075) 0:00:13.474 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Friday 22 June 2018 09:13:19 -0400 (0:00:00.070) 0:00:13.544 *********** 
ok: [controller-0 -> 192.168.24.8] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Friday 22 June 2018 09:13:19 -0400 (0:00:00.144) 0:00:13.689 *********** 
ok: [controller-0 -> 192.168.24.8] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "fsid"], "delta": "0:00:00.030274", "end": "2018-06-22 13:13:20.489487", "failed_when_result": false, "msg": "non-zero return code", "rc": 1, "start": "2018-06-22 13:13:20.459213", "stderr": "Error response from daemon: No such container: ceph-mon-controller-0", "stderr_lines": ["Error response from daemon: No such container: ceph-mon-controller-0"], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check if /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Friday 22 June 2018 09:13:20 -0400 (0:00:00.569) 0:00:14.259 *********** 
ok: [controller-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Friday 22 June 2018 09:13:20 -0400 (0:00:00.192) 0:00:14.451 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Friday 22 June 2018 09:13:20 -0400 (0:00:00.048) 0:00:14.499 *********** 
ok: [controller-0 -> localhost] => {"changed": false, "gid": 985, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 988}

TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Friday 22 June 2018 09:13:21 -0400 (0:00:00.428) 0:00:14.927 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Friday 22 June 2018 09:13:21 -0400 (0:00:00.044) 0:00:14.971 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85
Friday 22 June 2018 09:13:21 -0400 (0:00:00.071) 0:00:15.043 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96
Friday 22 June 2018 09:13:21 -0400 (0:00:00.043) 0:00:15.087 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105
Friday 22 June 2018 09:13:21 -0400 (0:00:00.049) 0:00:15.136 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117
Friday 22 June 2018 09:13:21 -0400 (0:00:00.041) 0:00:15.178 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123
Friday 22 June 2018 09:13:21 -0400 (0:00:00.040) 0:00:15.218 *********** 
ok: [controller-0] => {"ansible_facts": {"mds_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129
Friday 22 June 2018 09:13:21 -0400 (0:00:00.071) 0:00:15.290 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135
Friday 22 June 2018 09:13:21 -0400 (0:00:00.042) 0:00:15.332 *********** 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Friday 22 June 2018 09:13:21 -0400 (0:00:00.175) 0:00:15.508 *********** 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Friday 22 June 2018 09:13:21 -0400 (0:00:00.174) 0:00:15.683 *********** 
ok: [controller-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Friday 22 June 2018 09:13:22 -0400 (0:00:00.184) 0:00:15.868 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166
Friday 22 June 2018 09:13:22 -0400 (0:00:00.051) 0:00:15.919 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175
Friday 22 June 2018 09:13:22 -0400 (0:00:00.051) 0:00:15.970 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183
Friday 22 June 2018 09:13:22 -0400 (0:00:00.052) 0:00:16.023 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Friday 22 June 2018 09:13:22 -0400 (0:00:00.044) 0:00:16.068 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Friday 22 June 2018 09:13:22 -0400 (0:00:00.044) 0:00:16.112 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Friday 22 June 2018 09:13:22 -0400 (0:00:00.044) 0:00:16.156 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211
Friday 22 June 2018 09:13:22 -0400 (0:00:00.045) 0:00:16.202 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : set_fact ceph_directories] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2
Friday 22 June 2018 09:13:22 -0400 (0:00:00.167) 0:00:16.370 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_directories": ["/etc/ceph", "/var/lib/ceph/", "/var/lib/ceph/mon", "/var/lib/ceph/osd", "/var/lib/ceph/mds", "/var/lib/ceph/tmp", "/var/lib/ceph/radosgw", "/var/lib/ceph/bootstrap-rgw", "/var/lib/ceph/bootstrap-mds", "/var/lib/ceph/bootstrap-osd", "/var/lib/ceph/bootstrap-rbd", "/var/run/ceph"]}, "changed": false}

TASK [ceph-defaults : create ceph initial directories] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18
Friday 22 June 2018 09:13:22 -0400 (0:00:00.175) 0:00:16.545 *********** 
changed: [controller-0] => (item=/etc/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/etc/ceph", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/mon) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mon", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mon", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/tmp) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/tmp", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/tmp", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/radosgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/radosgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/radosgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [controller-0] => (item=/var/run/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/var/run/ceph", "mode": "0755", "owner": "167", "path": "/var/run/ceph", "secontext": "unconfined_u:object_r:var_run_t:s0", "size": 40, "state": "directory", "uid": 167}

TASK [ceph-docker-common : fail if systemd is not present] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2
Friday 22 June 2018 09:13:28 -0400 (0:00:05.340) 0:00:21.885 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2
Friday 22 June 2018 09:13:28 -0400 (0:00:00.046) 0:00:21.932 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11
Friday 22 June 2018 09:13:28 -0400 (0:00:00.055) 0:00:21.988 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove ceph udev rules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2
Friday 22 June 2018 09:13:28 -0400 (0:00:00.050) 0:00:22.038 *********** 
ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "path": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "state": "absent"}
ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "path": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "state": "absent"}

TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14
Friday 22 June 2018 09:13:29 -0400 (0:00:00.937) 0:00:22.976 *********** 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
Friday 22 June 2018 09:13:29 -0400 (0:00:00.073) 0:00:23.050 *********** 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get docker version] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
Friday 22 June 2018 09:13:29 -0400 (0:00:00.044) 0:00:23.094 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.024983", "end": "2018-06-22 13:13:29.839176", "rc": 0, "start": "2018-06-22 13:13:29.814193", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 94f4240/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 94f4240/1.13.1"]}

TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
Friday 22 June 2018 09:13:29 -0400 (0:00:00.506) 0:00:23.600 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}

TASK [ceph-docker-common : check if a cluster is already running] **************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42
Friday 22 June 2018 09:13:29 -0400 (0:00:00.068) 0:00:23.669 *********** 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.027717", "end": "2018-06-22 13:13:30.426991", "failed_when_result": false, "rc": 0, "start": "2018-06-22 13:13:30.399274", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-docker-common : set_fact ceph_config_keys] **************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2
Friday 22 June 2018 09:13:30 -0400 (0:00:00.521) 0:00:24.190 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13
Friday 22 June 2018 09:13:30 -0400 (0:00:00.085) 0:00:24.276 *********** 
ok: [controller-0] => (item=controller-0) => {"ansible_facts": {"tmp_ceph_mgr_keys": "/etc/ceph/ceph.mgr.controller-0.keyring"}, "changed": false, "item": "controller-0"}

TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20
Friday 22 June 2018 09:13:30 -0400 (0:00:00.122) 0:00:24.398 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_mgr_keys": ["/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25
Friday 22 June 2018 09:13:30 -0400 (0:00:00.084) 0:00:24.483 *********** 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : stat for ceph config and keys] **********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30
Friday 22 June 2018 09:13:30 -0400 (0:00:00.088) 0:00:24.571 *********** 
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mgr.controller-0.keyring", "stat": {"exists": false}}

TASK [ceph-docker-common : fail if we find existing cluster files] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5
Friday 22 June 2018 09:13:32 -0400 (0:00:01.242) 0:00:25.814 *********** 
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/monmap-ceph", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph"}}, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-osd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, 
\"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was 
False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': 
u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': 
True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.266) 0:00:26.081 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.043) 0:00:26.124 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.044) 0:00:26.169 *********** \nskipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.050) 0:00:26.220 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.048) 0:00:26.268 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.047) 0:00:26.316 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.045) 0:00:26.361 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.053) 0:00:26.414 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nFriday 22 
June 2018 09:13:32 -0400 (0:00:00.043) 0:00:26.458 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.049) 0:00:26.507 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.042) 0:00:26.549 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.041) 0:00:26.591 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.043) 0:00:26.634 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.047) 0:00:26.681 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.045) 0:00:26.727 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nFriday 22 June 2018 09:13:32 -0400 (0:00:00.042) 0:00:26.769 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.046) 0:00:26.816 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.046) 0:00:26.862 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.052) 0:00:26.914 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.043) 0:00:26.958 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.046) 0:00:27.004 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.041) 0:00:27.046 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.040) 0:00:27.087 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.046) 0:00:27.134 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.042) 0:00:27.176 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.045) 0:00:27.222 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.042) 0:00:27.264 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.045) 0:00:27.309 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.043) 0:00:27.352 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nFriday 22 June 2018 09:13:33 -0400 (0:00:00.051) 0:00:27.404 *********** \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.555002\", \"end\": \"2018-06-22 13:13:50.800180\", \"rc\": 0, \"start\": \"2018-06-22 13:13:34.245178\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nFriday 22 June 2018 09:13:50 -0400 (0:00:17.161) 0:00:44.566 *********** \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.029250\", \"end\": \"2018-06-22 13:13:51.426584\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:13:51.397334\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n 
\\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": 
\\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": 
{},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": 
\\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": 
\\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" 
\\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" 
\\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\", \" \\\"UpperDir\\\": 
\\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nFriday 22 June 2018 09:13:51 -0400 (0:00:00.629) 0:00:45.196 *********** \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nFriday 22 June 2018 09:13:51 -0400 (0:00:00.183) 0:00:45.379 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nFriday 22 June 2018 09:13:51 -0400 (0:00:00.049) 0:00:45.428 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nFriday 22 June 2018 09:13:51 -0400 (0:00:00.042) 0:00:45.471 *********** 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nFriday 22 June 2018 09:13:51 -0400 (0:00:00.041) 0:00:45.512 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nFriday 22 June 2018 09:13:51 -0400 (0:00:00.046) 0:00:45.558 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nFriday 22 June 2018 09:13:51 -0400 (0:00:00.049) 0:00:45.608 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nFriday 22 June 2018 09:13:51 -0400 (0:00:00.143) 0:00:45.751 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nFriday 22 June 2018 09:13:52 -0400 (0:00:00.046) 0:00:45.798 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nFriday 22 June 2018 09:13:52 -0400 (0:00:00.048) 0:00:45.846 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nFriday 22 June 2018 09:13:52 -0400 (0:00:00.046) 0:00:45.892 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nFriday 22 June 2018 09:13:52 -0400 (0:00:00.042) 0:00:45.935 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nFriday 22 June 2018 09:13:52 -0400 (0:00:00.050) 0:00:45.985 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.560244\", \"end\": \"2018-06-22 13:13:53.270265\", \"rc\": 0, \"start\": \"2018-06-22 13:13:52.710021\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nFriday 22 June 2018 09:13:53 -0400 (0:00:01.046) 0:00:47.032 *********** \nok: [controller-0] 
=> {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nFriday 22 June 2018 09:13:53 -0400 (0:00:00.071) 0:00:47.103 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nFriday 22 June 2018 09:13:53 -0400 (0:00:00.049) 0:00:47.153 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nFriday 22 June 2018 09:13:53 -0400 (0:00:00.045) 0:00:47.199 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nFriday 22 June 2018 09:13:53 -0400 (0:00:00.070) 0:00:47.269 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nFriday 22 June 2018 09:13:53 -0400 (0:00:00.045) 0:00:47.315 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nFriday 22 June 2018 09:13:53 -0400 
(0:00:00.046) 0:00:47.361 *********** \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nFriday 22 June 2018 09:13:55 -0400 (0:00:02.180) 
0:00:49.542 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nFriday 22 June 2018 09:13:55 -0400 (0:00:00.048) 0:00:49.590 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nFriday 22 June 2018 09:13:55 -0400 (0:00:00.048) 0:00:49.639 *********** \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nFriday 22 June 2018 09:13:56 -0400 (0:00:00.215) 0:00:49.854 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nFriday 22 June 2018 09:13:56 -0400 (0:00:00.050) 0:00:49.905 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nFriday 22 June 2018 09:13:56 -0400 (0:00:00.047) 0:00:49.953 *********** \nchanged: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": 
\"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nFriday 22 June 2018 09:13:56 -0400 (0:00:00.487) 0:00:50.440 *********** \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0\nNOTIFIED 
HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0\nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"8376233e5a1bc87f2c4fab91f94a8b75f6c6a2f6\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"0f740ab4fb6329f001a8e004a4e1d994\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 761, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673236.71-134691013098495/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nFriday 22 June 2018 09:13:59 -0400 (0:00:03.324) 0:00:53.765 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact docker_exec_cmd] *************************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.042) 0:00:53.808 *********** \nok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.069) 0:00:53.877 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : generate monitor initial keyring] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.052) 0:00:53.929 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : read monitor initial keyring if it already exists] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.044) 0:00:53.973 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create monitor initial keyring] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.049) 0:00:54.023 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set initial monitor key permissions] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.042) 0:00:54.065 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-mon : create (and fix ownership of) monitor directory] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.044) 0:00:54.109 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.044) 0:00:54.154 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.197 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create custom admin keyring] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.241 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set ownership of admin keyring] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.042) 0:00:54.284 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : import admin keyring into mon keyring] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.327 *********** \nskipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ceph monitor mkfs with keyring] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.044) 0:00:54.371 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ceph monitor mkfs without keyring] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.415 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ensure systemd service override directory exists] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.042) 0:00:54.458 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : add ceph-mon systemd service overrides] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.052) 0:00:54.510 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : start the monitor service] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.554 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : enable the ceph-mon.target service] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.598 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : include ceph_keys.yml] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.641 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : collect all the pools] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.684 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : secure the cluster] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7\nFriday 22 June 2018 09:14:00 -0400 (0:00:00.041) 0:00:54.726 *********** \n\nTASK [ceph-mon : set_fact ceph_config_keys] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2\nFriday 22 June 2018 09:14:01 -0400 (0:00:00.046) 0:00:54.773 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-mon : register rbd bootstrap key] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11\nFriday 22 June 2018 09:14:01 -0400 (0:00:00.074) 0:00:54.848 *********** \nok: [controller-0] => {\"ansible_facts\": {\"bootstrap_rbd_keyring\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-mon : merge rbd bootstrap key to config and keys paths] 
*************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17
>Friday 22 June 2018 09:14:01 -0400 (0:00:00.070) 0:00:54.918 *********** 
>ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}
>
>TASK [ceph-mon : stat for ceph config and keys] ********************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22
>Friday 22 June 2018 09:14:01 -0400 (0:00:00.075) 0:00:54.994 *********** 
>ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}
>ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}
>ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}
>ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}
>ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}
>ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}
>
>TASK [ceph-mon : try to copy ceph keys] ****************************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:33
>Friday 22 June 2018 09:14:02 -0400 (0:00:00.854) 0:00:55.848 *********** 
>skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-osd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rgw/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-mds/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
>skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : populate kv_store with default ceph.conf] *********************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2
>Friday 22 June 2018 09:14:02 -0400 (0:00:00.121) 0:00:55.969 *********** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : populate kv_store with custom ceph.conf] **********************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18
>Friday 22 June 2018 09:14:02 -0400 (0:00:00.047) 0:00:56.017 *********** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : delete populate-kv-store docker] ******************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36
>Friday 22 June 2018 09:14:02 -0400 (0:00:00.062) 0:00:56.080 *********** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : generate systemd unit file] ***********************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43
>Friday 22 June 2018 09:14:02 -0400 (0:00:00.045) 0:00:56.126 *********** 
>changed: [controller-0] => {"changed": true, "checksum": "c295bd0e2b9ac132014f0c7ae2b5171a5053fe0b", "dest": "/etc/systemd/system/ceph-mon@.service", "gid": 0, "group": "root", "md5sum": "ad5a25ce16b55be4b0d5e4bf757255da", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:systemd_unit_file_t:s0", "size": 835, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673242.39-172271329724894/source", "state": "file", "uid": 0}
>
>TASK [ceph-mon : systemd start mon container] **********************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54
>Friday 22 June 2018 09:14:05 -0400 (0:00:02.778) 0:00:58.904 *********** 
>ok: [controller-0] => {"changed": false, "enabled": true, "name": "ceph-mon@controller-0", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "basic.target system-ceph\\x5cx2dmon.slice docker.service systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Ceph Monitor", "DevicePolicy": "auto", "EnvironmentFile": "/etc/environment (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.18 -e CLUSTER=ceph -e FSID=53912472-747b-11e8-95a3-5254003d7dcb -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopPost": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/ceph-mon@.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "ceph-mon@controller-0.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127793", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127793", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "ceph-mon@controller-0.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "always", "RestartUSec": "10s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system-ceph\\x5cx2dmon.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "2min", "TimeoutStopUSec": "15s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "simple", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system-ceph\\x5cx2dmon.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}
>
>TASK [ceph-mon : configure ceph profile.d aliases] *****************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2
>Friday 22 June 2018 09:14:06 -0400 (0:00:00.895) 0:00:59.800 *********** 
>changed: [controller-0] => {"changed": true, "checksum": "78965c7dfcde4827c1cb8645bc7a444472e87718", "dest": "/etc/profile.d/ceph-aliases.sh", "gid": 0, "group": "root", "md5sum": "66a9bfe5c26a22ade3c67cc7c7a58d2c", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:bin_t:s0", "size": 375, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673246.18-258778462067608/source", "state": "file", "uid": 0}
>
>TASK [ceph-mon : wait for monitor socket to exist] *****************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12
>Friday 22 June 2018 09:14:08 -0400 (0:00:02.628) 0:01:02.429 *********** 
>changed: [controller-0] => {"attempts": 1, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "sh", "-c", "stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok"], "delta": "0:00:00.083078", "end": "2018-06-22 13:14:09.344607", "rc": 0, "start": "2018-06-22 13:14:09.261529", "stderr": "", "stderr_lines": [], "stdout": " File: '/var/run/ceph/ceph-mon.controller-0.asok'\n Size: 0 \tBlocks: 0 IO Block: 4096 socket\nDevice: 14h/20d\tInode: 371425 Links: 1\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\nAccess: 2018-06-22 13:14:07.048930719 +0000\nModify: 2018-06-22 13:14:07.048930719 +0000\nChange: 2018-06-22 13:14:07.048930719 +0000\n Birth: -", "stdout_lines": [" File: '/var/run/ceph/ceph-mon.controller-0.asok'", " Size: 0 \tBlocks: 0 IO Block: 4096 socket", "Device: 14h/20d\tInode: 371425 Links: 1", "Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)", "Access: 2018-06-22 13:14:07.048930719 +0000", "Modify: 2018-06-22 13:14:07.048930719 +0000", "Change: 2018-06-22 13:14:07.048930719 +0000", " Birth: -"]}
>
>TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19
>Friday 22 June 2018 09:14:09 -0400 (0:00:00.680) 0:01:03.110 *********** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29
>Friday 22 June 2018 09:14:09 -0400 (0:00:00.093) 0:01:03.203 *********** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39
>Friday 22 June 2018 09:14:09 -0400 (0:00:00.087) 0:01:03.291 *********** 
>ok: [controller-0] => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--admin-daemon", "/var/run/ceph/ceph-mon.controller-0.asok", "add_bootstrap_peer_hint", "172.17.3.18"], "delta": "0:00:00.186992", "end": "2018-06-22 13:14:10.515708", "failed_when_result": false, "rc": 0, "start": "2018-06-22 13:14:10.328716", "stderr": "", "stderr_lines": [], "stdout": "mon already active; ignoring bootstrap hint", "stdout_lines": ["mon already active; ignoring bootstrap hint"]}
>
>TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49
>Friday 22 June 2018 09:14:10 -0400 (0:00:00.986) 0:01:04.278 *********** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59
>Friday 22 June 2018 09:14:10 -0400 (0:00:00.048) 0:01:04.327 *********** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69
>Friday 22 June 2018 09:14:10 -0400 (0:00:00.223) 0:01:04.550 *********** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : push ceph files to the ansible server] ************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2
>Friday 22 June 2018 09:14:10 -0400 (0:00:00.048) 0:01:04.598 *********** 
>changed: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": true, "checksum": "793b49d83f132a70fc67d6c0569cfa8c71650741", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.admin.keyring", "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}], "md5sum": "edc649fc880af546c25f69c696fca0fe", "remote_checksum": "793b49d83f132a70fc67d6c0569cfa8c71650741", "remote_md5sum": null}
>changed: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": true, "checksum": "dae692cfee0fa0a32ffaad10f7d24e310a009db9", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.mon.keyring", "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}], "md5sum": "45da627f7c55925963e129ae734f2d5e", "remote_checksum": "dae692cfee0fa0a32ffaad10f7d24e310a009db9", "remote_md5sum": null}
>changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": true, "checksum": "d8a7f9eb9d9dc0395da75fc7759797ea97e335aa", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-osd/ceph.keyring", "item": ["/var/lib/ceph/bootstrap-osd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}], "md5sum": "5208039d17edb4ccda0d9023c061854b", "remote_checksum": "d8a7f9eb9d9dc0395da75fc7759797ea97e335aa", "remote_md5sum": null}
>changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": true, "checksum": "9613a61f8c01ce2de5a65853e6a5574e32ab15c0", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-rgw/ceph.keyring", "item": ["/var/lib/ceph/bootstrap-rgw/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}], "md5sum": "9e6c050c69d1e668638ae983ad165248", "remote_checksum": "9613a61f8c01ce2de5a65853e6a5574e32ab15c0", "remote_md5sum": null}
>changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": true, "checksum": "11de432a77f2de2b2705ea5780f568345ba62116", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-mds/ceph.keyring", "item": ["/var/lib/ceph/bootstrap-mds/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}], "md5sum": "782622eddeeebdfdb6434bdb74e33313", "remote_checksum": "11de432a77f2de2b2705ea5780f568345ba62116", "remote_md5sum": null}
>changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": true, "checksum": "fa627b4b6c0e4d6b86f16984405cd43c6dd3021c", "dest": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-rbd/ceph.keyring", "item": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}], "md5sum": "42c481a7f7e4ffbdc34aade7c3965f84", "remote_checksum": "fa627b4b6c0e4d6b86f16984405cd43c6dd3021c", "remote_md5sum": null}
>
>TASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84
>Friday 22 June 2018 09:14:13 -0400 (0:00:02.887) 0:01:07.486 *********** 
>skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
>
>TASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97
>Friday 22 June 2018 09:14:13 -0400 (0:00:00.046) 0:01:07.532 *********** 
>ok: [controller-0] => (item=controller-0) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "auth", "get-or-create", "mgr.controller-0", "mon", "allow profile mgr", "osd", "allow *", "mds", "allow *", "-o", "/etc/ceph/ceph.mgr.controller-0.keyring"], "delta": "0:00:00.342343", "end": "2018-06-22 13:14:14.722678", "item": "controller-0", "rc": 0, "start": "2018-06-22 13:14:14.380335", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
>
>TASK [ceph-mon : stat for ceph mgr key(s)] *************************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109
>Friday 22 June 2018 09:14:14 -0400 (0:00:00.952) 0:01:08.485 *********** 
>ok: [controller-0] => (item=controller-0) => {"changed": false, "failed_when_result": false, "item": "controller-0", "stat": {"atime": 1529673254.5999415, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "f1eb3e81a4f49f68787b67580eb8b9601f3e1e36", "ctime": 1529673254.7039416, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 69329262, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1529673254.7039416, "nlink": 1, "path": "/etc/ceph/ceph.mgr.controller-0.keyring", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 67, "uid": 0, "version": "18446744073449758241", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
>
>TASK [ceph-mon : fetch ceph mgr key(s)] ****************************************
>task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121
>Friday 22 June 2018 09:14:15 -0400 (0:00:00.625) 0:01:09.111 *********** 
>changed: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 0, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673254.7039416, u'block_size': 4096, u'inode': 69329262, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'charset': u'us-ascii', u'readable': True, u'version': u'18446744073449758241', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime':
1529673254.5999415, u'mimetype': u'text/plain', u'ctime': 1529673254.7039416, u'isblk': False, u'xgrp': False, u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'checksum': u'f1eb3e81a4f49f68787b67580eb8b9601f3e1e36', u'islnk': False, u'attributes': []}, u'changed': False, '_ansible_no_log': False, 'item': u'controller-0', '_ansible_item_result': True, 'failed': False, u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"dest\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.mgr.controller-0.keyring\", \"item\": {\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"controller-0\", \"stat\": {\"atime\": 1529673254.5999415, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"ctime\": 1529673254.7039416, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 69329262, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1529673254.7039416, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, 
\"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744073449758241\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}, \"md5sum\": \"27b1ed102ad44a0a24aa2cc10f78f0d3\", \"remote_checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"remote_md5sum\": null}\n\nTASK [ceph-mon : configure crush hierarchy] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2\nFriday 22 June 2018 09:14:15 -0400 (0:00:00.579) 0:01:09.690 *********** \nskipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create configured crush rules] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14\nFriday 22 June 2018 09:14:15 -0400 (0:00:00.049) 0:01:09.739 *********** \nskipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : get id for new default crush rule] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.053) 0:01:09.793 *********** \nskipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, 
\"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.054) 0:01:09.847 *********** \nskipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, '_ansible_ignore_errors': None}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.056) 0:01:09.903 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.069) 0:01:09.973 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : add new default crush rule to ceph.conf] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.072) 0:01:10.046 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.048) 0:01:10.095 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.050) 0:01:10.145 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.042) 0:01:10.188 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : set_fact 
osd_pool_default_pg_num ceph_conf_overrides.global.osd_pool_default_pg_num] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.045) 0:01:10.233 *********** \nok: [controller-0] => {\"ansible_facts\": {\"osd_pool_default_pg_num\": \"32\"}, \"changed\": false}\n\nTASK [ceph-mon : increase calamari logging level when debug is on] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:9\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.070) 0:01:10.303 *********** \nskipping: [controller-0] => (item=cthulhu) => {\"changed\": false, \"item\": \"cthulhu\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=calamari_web) => {\"changed\": false, \"item\": \"calamari_web\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : initialize the calamari server api] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:20\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.047) 0:01:10.351 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.014) 0:01:10.365 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nFriday 22 June 2018 09:14:16 -0400 (0:00:00.065) 0:01:10.431 *********** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"a16eea5d614de2b10079cb91a04686e919ccc201\", \"dest\": \"/tmp/restart_mon_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"b59e1abae52d61eb05b9ff080771a551\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 1173, \"src\": 
\"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673256.72-84454899936950/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nFriday 22 June 2018 09:14:19 -0400 (0:00:02.523) 0:01:12.954 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.083) 0:01:13.038 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.118) 0:01:13.156 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.066) 0:01:13.223 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.067) 0:01:13.290 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.044) 0:01:13.334 *********** \nskipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.072) 0:01:13.407 *********** \nskipping: [controller-0] => 
(item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.078) 0:01:13.485 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.066) 0:01:13.552 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.062) 0:01:13.614 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.042) 0:01:13.656 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.049) 0:01:13.706 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nFriday 22 June 2018 09:14:19 -0400 (0:00:00.054) 0:01:13.760 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nFriday 22 June 2018 09:14:20 -0400 (0:00:00.162) 0:01:13.923 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] 
**********************\nFriday 22 June 2018 09:14:20 -0400 (0:00:00.162) 0:01:14.086 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nFriday 22 June 2018 09:14:20 -0400 (0:00:00.060) 0:01:14.146 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nFriday 22 June 2018 09:14:20 -0400 (0:00:00.081) 0:01:14.227 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nFriday 22 June 2018 09:14:20 -0400 (0:00:00.078) 0:01:14.306 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nFriday 22 June 2018 09:14:20 -0400 (0:00:00.216) 0:01:14.522 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nFriday 22 June 2018 09:14:20 -0400 (0:00:00.170) 0:01:14.693 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nFriday 22 June 2018 09:14:20 -0400 (0:00:00.049) 0:01:14.742 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nFriday 22 June 2018 09:14:21 -0400 (0:00:00.059) 0:01:14.802 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nFriday 22 June 2018 09:14:21 -0400 (0:00:00.057) 0:01:14.859 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nFriday 22 June 2018 09:14:21 -0400 (0:00:00.164) 0:01:15.024 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nFriday 22 June 2018 09:14:21 -0400 (0:00:00.193) 0:01:15.217 *********** \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"f36b3460f6762a853a3dab1958afb7d83ff8f234\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"9d50588dc55f43284b00033b8b30edc3\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 570, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673261.64-37075549057491/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nFriday 22 June 2018 09:14:24 -0400 (0:00:02.583) 0:01:17.801 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nFriday 22 June 2018 09:14:24 -0400 (0:00:00.094) 0:01:17.895 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nFriday 22 June 2018 09:14:24 -0400 (0:00:00.138) 0:01:18.033 *********** \nok: [controller-0] => 
{\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mons] ********************************************************************\nMETA: ran handlers\n\nTASK [set ceph monitor install 'Complete'] *************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:98\nFriday 22 June 2018 09:14:24 -0400 (0:00:00.112) 0:01:18.145 *********** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"end\": \"20180622091424Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mgrs] ********************************************************************\n\nTASK [set ceph manager install 'In Progress'] **********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:110\nFriday 22 June 2018 09:14:24 -0400 (0:00:00.148) 0:01:18.294 *********** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"start\": \"20180622091424Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nFriday 22 June 2018 09:14:24 -0400 (0:00:00.081) 0:01:18.376 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.027560\", \"end\": \"2018-06-22 13:14:25.146567\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:25.119007\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"2d71e99d5d90\", \"stdout_lines\": [\"2d71e99d5d90\"]}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nFriday 22 June 2018 09:14:25 -0400 (0:00:00.532) 0:01:18.908 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nFriday 22 June 2018 09:14:25 -0400 (0:00:00.046) 0:01:18.955 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nFriday 22 June 2018 09:14:25 -0400 (0:00:00.049) 0:01:19.004 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nFriday 22 June 2018 09:14:25 -0400 (0:00:00.046) 0:01:19.051 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.028446\", \"end\": \"2018-06-22 13:14:25.815683\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:25.787237\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nFriday 22 June 2018 09:14:25 -0400 (0:00:00.525) 0:01:19.576 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] 
*******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nFriday 22 June 2018 09:14:25 -0400 (0:00:00.046) 0:01:19.623 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nFriday 22 June 2018 09:14:25 -0400 (0:00:00.044) 0:01:19.667 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nFriday 22 June 2018 09:14:25 -0400 (0:00:00.053) 0:01:19.720 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nFriday 22 June 2018 09:14:25 -0400 (0:00:00.046) 0:01:19.766 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:19.812 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.047) 0:01:19.860 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:19.906 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.049) 0:01:19.955 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:20.002 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.044) 0:01:20.047 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.042) 0:01:20.090 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.042) 0:01:20.132 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:20.179 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.052) 0:01:20.231 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.277 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.322 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.367 *********** \nskipping: [controller-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.412 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.458 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.043) 0:01:20.501 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.044) 0:01:20.546 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:20.592 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nFriday 22 June 2018 09:14:26 -0400 (0:00:00.044) 0:01:20.637 *********** \nok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nFriday 22 June 2018 09:14:27 -0400 (0:00:00.500) 0:01:21.137 *********** \nok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nFriday 22 June 2018 09:14:27 -0400 (0:00:00.069) 0:01:21.206 *********** \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nFriday 22 June 2018 09:14:27 -0400 (0:00:00.070) 0:01:21.277 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nFriday 22 June 2018 09:14:27 -0400 (0:00:00.066) 0:01:21.343 *********** \nok: [controller-0 -> 192.168.24.8] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] 
********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34\nFriday 22 June 2018 09:14:27 -0400 (0:00:00.134) 0:01:21.478 *********** \nok: [controller-0 -> 192.168.24.8] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.331332\", \"end\": \"2018-06-22 13:14:28.558606\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:28.227274\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"53912472-747b-11e8-95a3-5254003d7dcb\", \"stdout_lines\": [\"53912472-747b-11e8-95a3-5254003d7dcb\"]}\n\nTASK [ceph-defaults : check if /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir directory exists] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47\nFriday 22 June 2018 09:14:28 -0400 (0:00:00.848) 0:01:22.326 *********** \nok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57\nFriday 22 June 2018 09:14:28 -0400 (0:00:00.184) 0:01:22.511 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : create a local fetch directory if it does not exist] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64\nFriday 22 June 2018 09:14:28 -0400 (0:00:00.050) 0:01:22.562 *********** \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 50, \"state\": \"directory\", \"uid\": 988}\n\nTASK [ceph-defaults : set_fact fsid 
ceph_current_fsid.stdout] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74\nFriday 22 June 2018 09:14:28 -0400 (0:00:00.185) 0:01:22.748 *********** \nok: [controller-0] => {\"ansible_facts\": {\"fsid\": \"53912472-747b-11e8-95a3-5254003d7dcb\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.169) 0:01:22.917 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}\n\nTASK [ceph-defaults : generate cluster fsid] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.245) 0:01:23.162 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.044) 0:01:23.207 *********** \nchanged: [controller-0 -> localhost] => {\"changed\": true, \"cmd\": \"echo 53912472-747b-11e8-95a3-5254003d7dcb | tee /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"delta\": \"0:00:00.005088\", \"end\": \"2018-06-22 09:14:29.578341\", \"rc\": 0, \"start\": \"2018-06-22 09:14:29.573253\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"53912472-747b-11e8-95a3-5254003d7dcb\", \"stdout_lines\": [\"53912472-747b-11e8-95a3-5254003d7dcb\"]}\n\nTASK [ceph-defaults : read cluster fsid if it already exists] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.185) 0:01:23.392 *********** \nskipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact fsid] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.041) 0:01:23.433 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.038) 0:01:23.471 *********** \nok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.074) 0:01:23.546 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.040) 0:01:23.587 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.042) 0:01:23.629 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.043) 0:01:23.672 *********** \nskipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : resolve device link(s)] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.043) 0:01:23.716 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166\nFriday 22 June 2018 09:14:29 -0400 (0:00:00.046) 0:01:23.762 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175\nFriday 22 June 2018 09:14:30 -0400 (0:00:00.055) 0:01:23.818 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183\nFriday 22 June 2018 09:14:30 -0400 (0:00:00.045) 0:01:23.863 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nFriday 22 June 2018 09:14:30 -0400 (0:00:00.043) 0:01:23.907 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nFriday 22 June 2018 09:14:30 -0400 (0:00:00.047) 0:01:23.955 *********** 
\nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nFriday 22 June 2018 09:14:30 -0400 (0:00:00.044) 0:01:24.000 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nFriday 22 June 2018 09:14:30 -0400 (0:00:00.049) 0:01:24.049 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nFriday 22 June 2018 09:14:30 -0400 (0:00:00.073) 0:01:24.123 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nFriday 22 June 2018 09:14:30 -0400 (0:00:00.070) 0:01:24.193 *********** \nok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/) => 
{\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", 
\"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nFriday 22 June 2018 09:14:35 -0400 (0:00:05.361) 0:01:29.554 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure 
monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nFriday 22 June 2018 09:14:35 -0400 (0:00:00.052) 0:01:29.607 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nFriday 22 June 2018 09:14:35 -0400 (0:00:00.059) 0:01:29.666 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nFriday 22 June 2018 09:14:35 -0400 (0:00:00.052) 0:01:29.718 *********** \nok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nFriday 22 June 2018 09:14:36 -0400 (0:00:00.937) 0:01:30.656 *********** \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nFriday 22 June 2018 09:14:36 -0400 (0:00:00.075) 0:01:30.731 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nFriday 22 June 2018 09:14:37 -0400 (0:00:00.045) 0:01:30.777 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.026729\", \"end\": \"2018-06-22 13:14:37.541626\", \"rc\": 0, \"start\": \"2018-06-22 13:14:37.514897\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nFriday 22 June 2018 09:14:37 -0400 (0:00:00.521) 0:01:31.299 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nFriday 22 June 2018 09:14:37 -0400 (0:00:00.069) 0:01:31.368 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.027566\", \"end\": \"2018-06-22 13:14:38.144549\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:38.116983\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"2d71e99d5d90\", \"stdout_lines\": [\"2d71e99d5d90\"]}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.532) 0:01:31.901 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.050) 0:01:31.952 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.053) 0:01:32.005 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.047) 0:01:32.053 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.053) 0:01:32.106 *********** \nskipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional 
result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.100) 0:01:32.207 *********** \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, 
'_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': 
u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.105) 0:01:32.313 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.039) 0:01:32.352 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.039) 0:01:32.392 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.047) 0:01:32.439 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.045) 0:01:32.484 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.045) 0:01:32.530 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.042) 0:01:32.573 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.040) 0:01:32.613 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nFriday 22 June 2018 09:14:38 -0400 (0:00:00.040) 0:01:32.654 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"2d71e99d5d90\"], \"delta\": \"0:00:00.031052\", \"end\": \"2018-06-22 13:14:39.535609\", \"rc\": 0, \"start\": \"2018-06-22 13:14:39.504557\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105\\\",\\n \\\"Created\\\": \\\"2018-06-22T13:14:06.054795034Z\\\",\\n \\\"Path\\\": \\\"/entrypoint.sh\\\",\\n \\\"Args\\\": [],\\n \\\"State\\\": {\\n \\\"Status\\\": \\\"running\\\",\\n \\\"Running\\\": true,\\n \\\"Paused\\\": false,\\n \\\"Restarting\\\": false,\\n \\\"OOMKilled\\\": false,\\n \\\"Dead\\\": false,\\n \\\"Pid\\\": 50029,\\n \\\"ExitCode\\\": 0,\\n \\\"Error\\\": \\\"\\\",\\n \\\"StartedAt\\\": \\\"2018-06-22T13:14:06.243843393Z\\\",\\n \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\\n },\\n \\\"Image\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/resolv.conf\\\",\\n \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/hostname\\\",\\n \\\"HostsPath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/hosts\\\",\\n \\\"LogPath\\\": \\\"\\\",\\n \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\\n \\\"RestartCount\\\": 0,\\n \\\"Driver\\\": \\\"overlay2\\\",\\n \\\"MountLabel\\\": \\\"\\\",\\n \\\"ProcessLabel\\\": \\\"\\\",\\n \\\"AppArmorProfile\\\": \\\"\\\",\\n \\\"ExecIDs\\\": null,\\n \\\"HostConfig\\\": {\\n \\\"Binds\\\": [\\n 
\\\"/var/lib/ceph:/var/lib/ceph:z\\\",\\n \\\"/etc/ceph:/etc/ceph:z\\\",\\n \\\"/var/run/ceph:/var/run/ceph:z\\\",\\n \\\"/etc/localtime:/etc/localtime:ro\\\"\\n ],\\n \\\"ContainerIDFile\\\": \\\"\\\",\\n \\\"LogConfig\\\": {\\n \\\"Type\\\": \\\"journald\\\",\\n \\\"Config\\\": {}\\n },\\n \\\"NetworkMode\\\": \\\"host\\\",\\n \\\"PortBindings\\\": {},\\n \\\"RestartPolicy\\\": {\\n \\\"Name\\\": \\\"no\\\",\\n \\\"MaximumRetryCount\\\": 0\\n },\\n \\\"AutoRemove\\\": true,\\n \\\"VolumeDriver\\\": \\\"\\\",\\n \\\"VolumesFrom\\\": null,\\n \\\"CapAdd\\\": null,\\n \\\"CapDrop\\\": null,\\n \\\"Dns\\\": [],\\n \\\"DnsOptions\\\": [],\\n \\\"DnsSearch\\\": [],\\n \\\"ExtraHosts\\\": null,\\n \\\"GroupAdd\\\": null,\\n \\\"IpcMode\\\": \\\"\\\",\\n \\\"Cgroup\\\": \\\"\\\",\\n \\\"Links\\\": null,\\n \\\"OomScoreAdj\\\": 0,\\n \\\"PidMode\\\": \\\"\\\",\\n \\\"Privileged\\\": false,\\n \\\"PublishAllPorts\\\": false,\\n \\\"ReadonlyRootfs\\\": false,\\n \\\"SecurityOpt\\\": null,\\n \\\"UTSMode\\\": \\\"\\\",\\n \\\"UsernsMode\\\": \\\"\\\",\\n \\\"ShmSize\\\": 67108864,\\n \\\"Runtime\\\": \\\"docker-runc\\\",\\n \\\"ConsoleSize\\\": [\\n 0,\\n 0\\n ],\\n \\\"Isolation\\\": \\\"\\\",\\n \\\"CpuShares\\\": 0,\\n \\\"Memory\\\": 1073741824,\\n \\\"NanoCpus\\\": 0,\\n \\\"CgroupParent\\\": \\\"\\\",\\n \\\"BlkioWeight\\\": 0,\\n \\\"BlkioWeightDevice\\\": null,\\n \\\"BlkioDeviceReadBps\\\": null,\\n \\\"BlkioDeviceWriteBps\\\": null,\\n \\\"BlkioDeviceReadIOps\\\": null,\\n \\\"BlkioDeviceWriteIOps\\\": null,\\n \\\"CpuPeriod\\\": 0,\\n \\\"CpuQuota\\\": 100000,\\n \\\"CpuRealtimePeriod\\\": 0,\\n \\\"CpuRealtimeRuntime\\\": 0,\\n \\\"CpusetCpus\\\": \\\"\\\",\\n \\\"CpusetMems\\\": \\\"\\\",\\n \\\"Devices\\\": [],\\n \\\"DiskQuota\\\": 0,\\n \\\"KernelMemory\\\": 0,\\n \\\"MemoryReservation\\\": 0,\\n \\\"MemorySwap\\\": 2147483648,\\n \\\"MemorySwappiness\\\": -1,\\n \\\"OomKillDisable\\\": false,\\n \\\"PidsLimit\\\": 0,\\n \\\"Ulimits\\\": null,\\n 
\\\"CpuCount\\\": 0,\\n \\\"CpuPercent\\\": 0,\\n \\\"IOMaximumIOps\\\": 0,\\n \\\"IOMaximumBandwidth\\\": 0\\n },\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558-init/diff:/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff:/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/work\\\"\\n }\\n },\\n \\\"Mounts\\\": [\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/localtime\\\",\\n \\\"Destination\\\": \\\"/etc/localtime\\\",\\n \\\"Mode\\\": \\\"ro\\\",\\n \\\"RW\\\": false,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"volume\\\",\\n \\\"Name\\\": \\\"d532fedca1b6d8392347154e71bf722e79d74fd82670fc2a49f8d3fc1d56d161\\\",\\n \\\"Source\\\": \\\"/var/lib/docker/volumes/d532fedca1b6d8392347154e71bf722e79d74fd82670fc2a49f8d3fc1d56d161/_data\\\",\\n \\\"Destination\\\": \\\"/etc/ganesha\\\",\\n \\\"Driver\\\": \\\"local\\\",\\n \\\"Mode\\\": \\\"\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/lib/ceph\\\",\\n \\\"Destination\\\": \\\"/var/lib/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/ceph\\\",\\n \\\"Destination\\\": 
\\\"/etc/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/run/ceph\\\",\\n \\\"Destination\\\": \\\"/var/run/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n }\\n ],\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"controller-0\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": true,\\n \\\"AttachStderr\\\": true,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"IP_VERSION=4\\\",\\n \\\"MON_IP=172.17.3.18\\\",\\n \\\"CLUSTER=ceph\\\",\\n \\\"FSID=53912472-747b-11e8-95a3-5254003d7dcb\\\",\\n \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\\n \\\"CEPH_DAEMON=MON\\\",\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-6\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": null,\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n 
\\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"NetworkSettings\\\": {\\n \\\"Bridge\\\": \\\"\\\",\\n \\\"SandboxID\\\": \\\"b3067360b0180302c4d89730192f368dac349d894129a3da6d44325aa6eb1c61\\\",\\n \\\"HairpinMode\\\": false,\\n \\\"LinkLocalIPv6Address\\\": \\\"\\\",\\n \\\"LinkLocalIPv6PrefixLen\\\": 0,\\n \\\"Ports\\\": {},\\n \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\\n \\\"SecondaryIPAddresses\\\": null,\\n 
\\\"SecondaryIPv6Addresses\\\": null,\\n \\\"EndpointID\\\": \\\"\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"MacAddress\\\": \\\"\\\",\\n \\\"Networks\\\": {\\n \\\"host\\\": {\\n \\\"IPAMConfig\\\": null,\\n \\\"Links\\\": null,\\n \\\"Aliases\\\": null,\\n \\\"NetworkID\\\": \\\"711dcc9ffeccb18f54b7514bd551f9bdb54b06d72e8dc7b01a2c8e3b296c8f01\\\",\\n \\\"EndpointID\\\": \\\"8ce97204aa7fce9ca1ea5681bede3d64665fa1799f687a4ddc2655cd0e5c0312\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"MacAddress\\\": \\\"\\\"\\n }\\n }\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105\\\",\", \" \\\"Created\\\": \\\"2018-06-22T13:14:06.054795034Z\\\",\", \" \\\"Path\\\": \\\"/entrypoint.sh\\\",\", \" \\\"Args\\\": [],\", \" \\\"State\\\": {\", \" \\\"Status\\\": \\\"running\\\",\", \" \\\"Running\\\": true,\", \" \\\"Paused\\\": false,\", \" \\\"Restarting\\\": false,\", \" \\\"OOMKilled\\\": false,\", \" \\\"Dead\\\": false,\", \" \\\"Pid\\\": 50029,\", \" \\\"ExitCode\\\": 0,\", \" \\\"Error\\\": \\\"\\\",\", \" \\\"StartedAt\\\": \\\"2018-06-22T13:14:06.243843393Z\\\",\", \" \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\", \" },\", \" \\\"Image\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/resolv.conf\\\",\", \" \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/hostname\\\",\", \" \\\"HostsPath\\\": 
\\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/hosts\\\",\", \" \\\"LogPath\\\": \\\"\\\",\", \" \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\", \" \\\"RestartCount\\\": 0,\", \" \\\"Driver\\\": \\\"overlay2\\\",\", \" \\\"MountLabel\\\": \\\"\\\",\", \" \\\"ProcessLabel\\\": \\\"\\\",\", \" \\\"AppArmorProfile\\\": \\\"\\\",\", \" \\\"ExecIDs\\\": null,\", \" \\\"HostConfig\\\": {\", \" \\\"Binds\\\": [\", \" \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\",\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" \\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" \\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" \\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" \\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": \\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 1073741824,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" 
\\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 2147483648,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" \\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558-init/diff:/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff:/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": 
\\\"volume\\\",\", \" \\\"Name\\\": \\\"d532fedca1b6d8392347154e71bf722e79d74fd82670fc2a49f8d3fc1d56d161\\\",\", \" \\\"Source\\\": \\\"/var/lib/docker/volumes/d532fedca1b6d8392347154e71bf722e79d74fd82670fc2a49f8d3fc1d56d161/_data\\\",\", \" \\\"Destination\\\": \\\"/etc/ganesha\\\",\", \" \\\"Driver\\\": \\\"local\\\",\", \" \\\"Mode\\\": \\\"\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.18\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=53912472-747b-11e8-95a3-5254003d7dcb\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" 
\\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-6\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e 
MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" \\\"Bridge\\\": \\\"\\\",\", \" \\\"SandboxID\\\": \\\"b3067360b0180302c4d89730192f368dac349d894129a3da6d44325aa6eb1c61\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": \\\"711dcc9ffeccb18f54b7514bd551f9bdb54b06d72e8dc7b01a2c8e3b296c8f01\\\",\", \" \\\"EndpointID\\\": \\\"8ce97204aa7fce9ca1ea5681bede3d64665fa1799f687a4ddc2655cd0e5c0312\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" 
}\", \"]\"]}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nFriday 22 June 2018 09:14:39 -0400 (0:00:00.660) 0:01:33.315 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nFriday 22 June 2018 09:14:39 -0400 (0:00:00.042) 0:01:33.357 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nFriday 22 June 2018 09:14:39 -0400 (0:00:00.042) 0:01:33.400 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nFriday 22 June 2018 09:14:39 -0400 (0:00:00.044) 0:01:33.444 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nFriday 22 June 2018 09:14:39 -0400 (0:00:00.047) 0:01:33.491 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nFriday 22 June 2018 09:14:39 -0400 (0:00:00.041) 0:01:33.533 *********** \nskipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nFriday 22 June 2018 09:14:39 -0400 (0:00:00.043) 0:01:33.576 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\"], \"delta\": \"0:00:00.028122\", \"end\": \"2018-06-22 13:14:40.436694\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:40.408572\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f 
'/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 
on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": 
\\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\\n 
\\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" 
\\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker 
run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" 
\\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, 
Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.633) 0:01:34.209 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.044) 0:01:34.254 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.046) 0:01:34.300 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.043) 0:01:34.344 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.049) 0:01:34.394 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.045) 0:01:34.439 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.130) 0:01:34.569 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_mon_image_repodigest_before_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.085) 0:01:34.655 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.045) 0:01:34.701 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nFriday 22 June 2018 09:14:40 -0400 (0:00:00.048) 0:01:34.749 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nFriday 22 June 2018 09:14:41 -0400 (0:00:00.046) 0:01:34.795 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nFriday 22 June 2018 09:14:41 -0400 (0:00:00.049) 0:01:34.845 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nFriday 22 June 2018 09:14:41 -0400 (0:00:00.045) 0:01:34.890 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nFriday 22 June 2018 09:14:41 -0400 (0:00:00.046) 0:01:34.937 *********** \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.036769\", \"end\": \"2018-06-22 13:14:41.717045\", \"rc\": 0, \"start\": \"2018-06-22 13:14:41.680276\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-6\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nFriday 22 June 2018 09:14:41 -0400 (0:00:00.546) 0:01:35.483 *********** \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.029313\", \"end\": \"2018-06-22 13:14:42.243271\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:42.213958\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n 
\\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n 
\\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": 
{},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d 
--net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" 
\\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": 
\\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", 
\" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\", \" \\\"WorkDir\\\": 
\\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.531) 0:01:36.015 *********** \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.078) 0:01:36.094 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.053) 0:01:36.148 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.044) 0:01:36.192 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : 
set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.043) 0:01:36.235 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.055) 0:01:36.291 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.050) 0:01:36.342 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.046) 0:01:36.389 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.049) 0:01:36.438 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.044) 0:01:36.483 *********** \nskipping: [controller-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.052) 0:01:36.535 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.044) 0:01:36.579 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nFriday 22 June 2018 09:14:42 -0400 (0:00:00.043) 0:01:36.623 *********** \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.571111\", \"end\": \"2018-06-22 13:14:43.929929\", \"rc\": 0, \"start\": \"2018-06-22 13:14:43.358818\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nFriday 22 June 2018 09:14:43 -0400 (0:00:01.079) 0:01:37.702 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nFriday 22 June 2018 09:14:44 -0400 (0:00:00.074) 0:01:37.777 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nFriday 22 June 2018 09:14:44 -0400 (0:00:00.047) 0:01:37.825 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nFriday 22 June 2018 09:14:44 -0400 (0:00:00.049) 0:01:37.874 *********** \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nFriday 22 June 2018 09:14:44 -0400 (0:00:00.076) 0:01:37.951 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nFriday 22 June 2018 09:14:44 -0400 (0:00:00.052) 0:01:38.003 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nFriday 22 June 2018 09:14:44 -0400 (0:00:00.051) 0:01:38.055 *********** \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": 
\"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nFriday 22 June 2018 09:14:46 -0400 (0:00:02.332) 0:01:40.387 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration 
file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nFriday 22 June 2018 09:14:46 -0400 (0:00:00.049) 0:01:40.437 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nFriday 22 June 2018 09:14:46 -0400 (0:00:00.049) 0:01:40.486 *********** \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 80, \"state\": \"directory\", \"uid\": 988}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nFriday 22 June 2018 09:14:46 -0400 (0:00:00.201) 0:01:40.687 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nFriday 22 June 2018 09:14:46 -0400 (0:00:00.053) 0:01:40.740 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nFriday 22 June 2018 09:14:47 -0400 (0:00:00.046) 0:01:40.787 *********** \nchanged: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 
167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nFriday 22 June 2018 09:14:47 -0400 (0:00:00.521) 0:01:41.309 *********** \nok: [controller-0] => {\"changed\": false, \"checksum\": \"8376233e5a1bc87f2c4fab91f94a8b75f6c6a2f6\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"0f740ab4fb6329f001a8e004a4e1d994\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 761, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673287.59-135812560192411/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nFriday 22 June 2018 09:14:49 -0400 (0:00:01.705) 0:01:43.015 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : set_fact docker_exec_cmd] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2\nFriday 22 June 2018 09:14:49 -0400 (0:00:00.049) 0:01:43.064 *********** \nok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd_mgr\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-mgr : create mgr directory] *****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2\nFriday 22 June 2018 09:14:49 -0400 (0:00:00.198) 0:01:43.263 *********** \nok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10\nFriday 22 June 2018 09:14:50 -0400 (0:00:00.615) 0:01:43.879 *********** \nchanged: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"name\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"md5sum\": \"27b1ed102ad44a0a24aa2cc10f78f0d3\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673290.16-208677308831713/source\", \"state\": \"file\", \"uid\": 167}\nskipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"dest\": \"/etc/ceph/ceph.client.admin.keyring\", \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : set mgr key permissions] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24\nFriday 22 June 2018 09:14:52 -0400 (0:00:02.600) 0:01:46.480 *********** \nok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"state\": \"file\", \"uid\": 167}\n\nTASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2\nFriday 22 June 2018 09:14:53 -0400 (0:00:00.518) 
0:01:46.998 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : install ceph mgr for debian] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9\nFriday 22 June 2018 09:14:53 -0400 (0:00:00.045) 0:01:47.043 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : ensure systemd service override directory exists] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17\nFriday 22 June 2018 09:14:53 -0400 (0:00:00.044) 0:01:47.088 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25\nFriday 22 June 2018 09:14:53 -0400 (0:00:00.046) 0:01:47.135 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : start and add that the mgr service to the init sequence] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35\nFriday 22 June 2018 09:14:53 -0400 (0:00:00.044) 0:01:47.179 *********** \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2\nFriday 22 June 2018 09:14:53 -0400 (0:00:00.047) 0:01:47.226 *********** \nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0\nNOTIFIED HANDLER 
ceph-defaults : restart ceph mgr daemon(s) - container for controller-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0\nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"fb2f3078fffe963a7fd0473c7b908931939d5c73\", \"dest\": \"/etc/systemd/system/ceph-mgr@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"7b527fb0a44d25cf825cb2b6fcb2b07e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 733, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673293.6-41326754079202/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mgr : systemd start mgr container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13\nFriday 22 June 2018 09:14:56 -0400 (0:00:02.875) 0:01:50.102 *********** \nok: [controller-0] => {\"changed\": false, \"enabled\": true, \"name\": \"ceph-mgr@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket basic.target system-ceph\\\\x5cx2dmgr.slice docker.service\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", 
\"Delegate\": \"no\", \"Description\": \"Ceph Manager\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 --name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mgr@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mgr@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": 
\"4096\", \"LimitNPROC\": \"127793\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127793\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mgr@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmgr.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", 
\"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmgr.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19\nFriday 22 June 2018 09:14:57 -0400 (0:00:00.805) 0:01:50.907 *********** \nchanged: [controller-0 -> 192.168.24.8] => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"--format\", \"json\", \"mgr\", \"module\", \"ls\"], \"delta\": \"0:00:00.389752\", \"end\": \"2018-06-22 13:14:58.094029\", \"rc\": 0, \"start\": \"2018-06-22 13:14:57.704277\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"enabled_modules\\\":[\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[]}\", \"stdout_lines\": [\"\", \"{\\\"enabled_modules\\\":[\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[]}\"]}\n\nTASK [ceph-mgr : set _ceph_mgr_modules fact] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26\nFriday 22 June 2018 09:14:58 -0400 (0:00:00.954) 0:01:51.862 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_ceph_mgr_modules\": {\"disabled_modules\": [], \"enabled_modules\": [\"restful\", \"status\"]}}, \"changed\": false}\n\nTASK [ceph-mgr : disable ceph mgr enabled modules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:30\nFriday 22 June 2018 09:14:58 -0400 (0:00:00.105) 0:01:51.967 *********** \nchanged: [controller-0 -> 192.168.24.8] => (item=restful) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"restful\"], \"delta\": \"0:00:01.349993\", \"end\": \"2018-06-22 13:15:00.100317\", \"item\": \"restful\", \"rc\": 
0, \"start\": \"2018-06-22 13:14:58.750324\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nskipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mgr : add modules to ceph-mgr] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:41\nFriday 22 June 2018 09:15:00 -0400 (0:00:01.948) 0:01:53.916 *********** \nskipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nFriday 22 June 2018 09:15:00 -0400 (0:00:00.027) 0:01:53.943 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nFriday 22 June 2018 09:15:00 -0400 (0:00:00.064) 0:01:54.008 *********** \nok: [controller-0] => {\"changed\": false, \"checksum\": \"f36b3460f6762a853a3dab1958afb7d83ff8f234\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"mode\": \"0750\", \"owner\": \"root\", \"path\": \"/tmp/restart_mgr_daemon.sh\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 570, \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nFriday 22 June 2018 09:15:02 -0400 (0:00:01.995) 0:01:56.003 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nFriday 22 June 2018 09:15:02 -0400 (0:00:00.083) 0:01:56.087 *********** \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", 
\"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nFriday 22 June 2018 09:15:02 -0400 (0:00:00.126) 0:01:56.213 *********** \nok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph manager install 'Complete'] *************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:129\nFriday 22 June 2018 09:15:02 -0400 (0:00:00.093) 0:01:56.306 *********** \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"end\": \"20180622091502Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY [osds] ********************************************************************\n\nTASK [set ceph osd install 'In Progress'] **************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:141\nFriday 22 June 2018 09:15:02 -0400 (0:00:00.146) 0:01:56.453 *********** \nok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"start\": \"20180622091502Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nFriday 22 June 2018 09:15:02 -0400 (0:00:00.068) 0:01:56.521 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nFriday 22 June 2018 09:15:02 -0400 (0:00:00.040) 0:01:56.562 *********** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", 
\"--filter=name=ceph-osd-ceph-0\"], \"delta\": \"0:00:00.024219\", \"end\": \"2018-06-22 13:15:03.307661\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:15:03.283442\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.498) 0:01:57.060 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.042) 0:01:57.103 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.039) 0:01:57.143 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.040) 0:01:57.183 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.039) 0:01:57.222 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result 
was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.041) 0:01:57.263 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.046) 0:01:57.310 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.040) 0:01:57.350 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.039) 0:01:57.389 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.037) 0:01:57.427 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nFriday 22 June 2018 09:15:03 -0400 
(0:00:00.036) 0:01:57.463 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.035) 0:01:57.498 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.038) 0:01:57.537 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nFriday 22 June 2018 09:15:03 -0400 (0:00:00.197) 0:01:57.735 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.040) 0:01:57.775 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.039) 0:01:57.815 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.037) 0:01:57.852 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.043) 0:01:57.896 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.040) 0:01:57.937 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.045) 0:01:57.982 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.038) 0:01:58.021 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.038) 0:01:58.060 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.039) 0:01:58.099 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.037) 0:01:58.137 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.036) 0:01:58.174 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.042) 0:01:58.216 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.040) 0:01:58.257 *********** \nok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nFriday 22 June 2018 09:15:04 -0400 (0:00:00.471) 0:01:58.729 *********** \nok: [ceph-0] 
=> {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Friday 22 June 2018 09:15:05 -0400 (0:00:00.068) 0:01:58.798 ***********
ok: [ceph-0] => {"ansible_facts": {"monitor_name": "ceph-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Friday 22 June 2018 09:15:05 -0400 (0:00:00.066) 0:01:58.864 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Friday 22 June 2018 09:15:05 -0400 (0:00:00.069) 0:01:58.934 ***********
ok: [ceph-0 -> 192.168.24.8] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Friday 22 June 2018 09:15:05 -0400 (0:00:00.129) 0:01:59.063 ***********
ok: [ceph-0 -> 192.168.24.8] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "fsid"], "delta": "0:00:00.325414", "end": "2018-06-22 13:15:06.137908", "failed_when_result": false, "rc": 0, "start": "2018-06-22 13:15:05.812494", "stderr": "", "stderr_lines": [], "stdout": "53912472-747b-11e8-95a3-5254003d7dcb", "stdout_lines": ["53912472-747b-11e8-95a3-5254003d7dcb"]}

TASK [ceph-defaults : check if /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Friday 22 June 2018 09:15:06 -0400 (0:00:00.843) 0:01:59.907 ***********
ok: [ceph-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Friday 22 June 2018 09:15:06 -0400 (0:00:00.197) 0:02:00.104 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Friday 22 June 2018 09:15:06 -0400 (0:00:00.047) 0:02:00.152 ***********
ok: [ceph-0 -> localhost] => {"changed": false, "gid": 985, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir", "secontext": "system_u:object_r:var_lib_t:s0", "size": 80, "state": "directory", "uid": 988}

TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Friday 22 June 2018 09:15:06 -0400 (0:00:00.197) 0:02:00.349 ***********
ok: [ceph-0] => {"ansible_facts": {"fsid": "53912472-747b-11e8-95a3-5254003d7dcb"}, "changed": false}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Friday 22 June 2018 09:15:06 -0400 (0:00:00.072) 0:02:00.422 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85
Friday 22 June 2018 09:15:06 -0400 (0:00:00.068) 0:02:00.490 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96
Friday 22 June 2018 09:15:06 -0400 (0:00:00.040) 0:02:00.530 ***********
ok: [ceph-0 -> localhost] => {"changed": false, "cmd": "echo 53912472-747b-11e8-95a3-5254003d7dcb | tee /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf", "rc": 0, "stdout": "skipped, since /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists", "stdout_lines": ["skipped, since /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists"]}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105
Friday 22 June 2018 09:15:06 -0400 (0:00:00.194) 0:02:00.724 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117
Friday 22 June 2018 09:15:06 -0400 (0:00:00.038) 0:02:00.763 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123
Friday 22 June 2018 09:15:07 -0400 (0:00:00.039) 0:02:00.802 ***********
ok: [ceph-0] => {"ansible_facts": {"mds_name": "ceph-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129
Friday 22 June 2018 09:15:07 -0400 (0:00:00.075) 0:02:00.878 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135
Friday 22 June 2018 09:15:07 -0400 (0:00:00.049) 0:02:00.927 ***********
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Friday 22 June 2018 09:15:07 -0400 (0:00:00.068) 0:02:00.996 ***********
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Friday 22 June 2018 09:15:07 -0400 (0:00:00.065) 0:02:01.061 ***********
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Friday 22 June 2018 09:15:07 -0400 (0:00:00.067) 0:02:01.129 ***********
ok: [ceph-0] => (item=/dev/vdb) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdb"], "delta": "0:00:00.003396", "end": "2018-06-22 13:15:07.879754", "item": "/dev/vdb", "rc": 0, "start": "2018-06-22 13:15:07.876358", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdb", "stdout_lines": ["/dev/vdb"]}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166
Friday 22 June 2018 09:15:07 -0400 (0:00:00.512) 0:02:01.641 ***********
ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-06-22 13:15:07.879754', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.003396', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-06-22 13:15:07.876358', '_ansible_ignore_errors': None, 'failed': False}) => {"ansible_facts": {"devices": ["/dev/vdb", "/dev/vdb"]}, "changed": false, "item": {"changed": false, "cmd": ["readlink", "-f", "/dev/vdb"], "delta": "0:00:00.003396", "end": "2018-06-22 13:15:07.879754", "failed": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/vdb", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": "/dev/vdb", "rc": 0, "start": "2018-06-22 13:15:07.876358", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdb", "stdout_lines": ["/dev/vdb"]}}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175
Friday 22 June 2018 09:15:07 -0400 (0:00:00.090) 0:02:01.732 ***********
ok: [ceph-0] => {"ansible_facts": {"devices": ["/dev/vdb"]}, "changed": false}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183
Friday 22 June 2018 09:15:08 -0400 (0:00:00.080) 0:02:01.812 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Friday 22 June 2018 09:15:08 -0400 (0:00:00.044) 0:02:01.857 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Friday 22 June 2018 09:15:08 -0400 (0:00:00.043) 0:02:01.900 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Friday 22 June 2018 09:15:08 -0400 (0:00:00.042) 0:02:01.943 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211
Friday 22 June 2018 09:15:08 -0400 (0:00:00.042) 0:02:01.985 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : set_fact ceph_directories] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2
Friday 22 June 2018 09:15:08 -0400 (0:00:00.080) 0:02:02.065 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_directories": ["/etc/ceph", "/var/lib/ceph/", "/var/lib/ceph/mon", "/var/lib/ceph/osd", "/var/lib/ceph/mds", "/var/lib/ceph/tmp", "/var/lib/ceph/radosgw", "/var/lib/ceph/bootstrap-rgw", "/var/lib/ceph/bootstrap-mds", "/var/lib/ceph/bootstrap-osd", "/var/lib/ceph/bootstrap-rbd", "/var/run/ceph"]}, "changed": false}

TASK [ceph-defaults : create ceph initial directories] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18
Friday 22 June 2018 09:15:08 -0400 (0:00:00.067) 0:02:02.133 ***********
changed: [ceph-0] => (item=/etc/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/etc/ceph", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/mon) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mon", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mon", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/tmp) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/tmp", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/tmp", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/radosgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/radosgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/radosgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
changed: [ceph-0] => (item=/var/run/ceph) => {"changed": true, "gid": 167, "group": "167", "item": "/var/run/ceph", "mode": "0755", "owner": "167", "path": "/var/run/ceph", "secontext": "unconfined_u:object_r:var_run_t:s0", "size": 40, "state": "directory", "uid": 167}

TASK [ceph-docker-common : fail if systemd is not present] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2
Friday 22 June 2018 09:15:13 -0400 (0:00:05.077) 0:02:07.211 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2
Friday 22 June 2018 09:15:13 -0400 (0:00:00.042) 0:02:07.253 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11
Friday 22 June 2018 09:15:13 -0400 (0:00:00.039) 0:02:07.292 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove ceph udev rules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2
Friday 22 June 2018 09:15:13 -0400 (0:00:00.038) 0:02:07.331 ***********
ok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "path": "/usr/lib/udev/rules.d/95-ceph-osd.rules", "state": "absent"}
ok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {"changed": false, "item": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "path": "/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules", "state": "absent"}

TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14
Friday 22 June 2018 09:15:14 -0400 (0:00:00.875) 0:02:08.207 ***********
ok: [ceph-0] => {"ansible_facts": {"monitor_name": "ceph-0"}, "changed": false}

TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20
Friday 22 June 2018 09:15:14 -0400 (0:00:00.068) 0:02:08.276 ***********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get docker version] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26
Friday 22 June 2018 09:15:14 -0400 (0:00:00.038) 0:02:08.315 ***********
ok: [ceph-0] => {"changed": false, "cmd": ["docker", "--version"], "delta": "0:00:00.024004", "end": "2018-06-22 13:15:15.045795", "rc": 0, "start": "2018-06-22 13:15:15.021791", "stderr": "", "stderr_lines": [], "stdout": "Docker version 1.13.1, build 94f4240/1.13.1", "stdout_lines": ["Docker version 1.13.1, build 94f4240/1.13.1"]}

TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32
Friday 22 June 2018 09:15:15 -0400 (0:00:00.486) 0:02:08.801 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_docker_version": "1.13.1,"}, "changed": false}

TASK [ceph-docker-common : check if a cluster is already running] **************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42
Friday 22 June 2018 09:15:15 -0400 (0:00:00.170) 0:02:08.972 ***********
ok: [ceph-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-ceph-0"], "delta": "0:00:00.026604", "end": "2018-06-22 13:15:15.819480", "failed_when_result": false, "rc": 0, "start": "2018-06-22 13:15:15.792876", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-docker-common : set_fact ceph_config_keys] **************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2
Friday 22 June 2018 09:15:15 -0400 (0:00:00.601) 0:02:09.573 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13
Friday 22 June 2018 09:15:15 -0400 (0:00:00.090) 0:02:09.664 ***********
ok: [ceph-0] => (item=controller-0) => {"ansible_facts": {"tmp_ceph_mgr_keys": "/etc/ceph/ceph.mgr.controller-0.keyring"}, "changed": false, "item": "controller-0"}

TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20
Friday 22 June 2018 09:15:16 -0400 (0:00:00.220) 0:02:09.884 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_mgr_keys": ["/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25
Friday 22 June 2018 09:15:16 -0400 (0:00:00.187) 0:02:10.072 ***********
ok: [ceph-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/monmap-ceph", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "/etc/ceph/ceph.mgr.controller-0.keyring"]}, "changed": false}

TASK [ceph-docker-common : stat for ceph config and keys] **********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30
Friday 22 June 2018 09:15:16 -0400 (0:00:00.187) 0:02:10.259 ***********
ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"atime": 1529673251.412, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "793b49d83f132a70fc67d6c0569cfa8c71650741", "ctime": 1529673251.412, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 29440356, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529673251.412, "nlink": 1, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 159, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}
ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"atime": 1529673251.858, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "dae692cfee0fa0a32ffaad10f7d24e310a009db9", "ctime": 1529673251.858, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 29440357, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529673251.858, "nlink": 1, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 688, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"atime": 1529673252.32, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "d8a7f9eb9d9dc0395da75fc7759797ea97e335aa", "ctime": 1529673252.32, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 46404843, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529673252.32, "nlink": 1, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"atime": 1529673252.774, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "9613a61f8c01ce2de5a65853e6a5574e32ab15c0", "ctime": 1529673252.774, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 51235195, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529673252.774, "nlink": 1, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"atime": 1529673253.23, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "11de432a77f2de2b2705ea5780f568345ba62116", "ctime": 1529673253.23, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 56054668, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529673253.23, "nlink": 1, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"atime": 1529673253.677, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "fa627b4b6c0e4d6b86f16984405cd43c6dd3021c", "ctime": 1529673253.677, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 58720433, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529673253.677, "nlink": 1, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 113, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mgr.controller-0.keyring", "stat": {"atime": 1529673290.805, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "f1eb3e81a4f49f68787b67580eb8b9601f3e1e36", "ctime": 1529673255.881, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 29440358, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529673255.881, "nlink": 1, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 67, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}

TASK [ceph-docker-common : fail if we find existing cluster files] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5
Friday 22 June 2018 09:15:17 -0400 (0:00:01.315) 0:02:11.575 ***********
skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673251.412, u'block_size': 4096, u'inode': 29440356, u'isgid': False, u'size': 159, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1529673251.412, u'isdir': False, u'ctime': 1529673251.412, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'793b49d83f132a70fc67d6c0569cfa8c71650741', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"atime": 1529673251.412, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "793b49d83f132a70fc67d6c0569cfa8c71650741", "ctime": 1529673251.412, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 29440356, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529673251.412, "nlink": 1, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 159, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}], "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/monmap-ceph", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph"}}, "item": "/etc/ceph/monmap-ceph", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673251.858, u'block_size': 4096, u'inode': 29440357, u'isgid': False, u'size': 688, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1529673251.858, u'isdir': False, u'ctime': 1529673251.858, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'dae692cfee0fa0a32ffaad10f7d24e310a009db9', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {"changed": false, "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"atime": 1529673251.858, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "unknown", "checksum": "dae692cfee0fa0a32ffaad10f7d24e310a009db9", "ctime": 1529673251.858, "dev": 64769, "device_type": 0, "executable": false, "exists": true, "gid": 985, "gr_name": "mistral", "inode": 29440357, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "unknown", "mode": "0644", "mtime": 1529673251.858, "nlink": 1, "path": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring", "pw_name": "mistral", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 688, "uid": 988, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}], "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673252.32, u'block_size': 4096, u'inode': 46404843, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': 
u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1529673252.32, u'isdir': False, u'ctime': 1529673252.32, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'd8a7f9eb9d9dc0395da75fc7759797ea97e335aa', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1529673252.32, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": 
\"unknown\", \"checksum\": \"d8a7f9eb9d9dc0395da75fc7759797ea97e335aa\", \"ctime\": 1529673252.32, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 46404843, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673252.32, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673252.774, u'block_size': 4096, u'inode': 51235195, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1529673252.774, u'isdir': False, u'ctime': 1529673252.774, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': 
u'9613a61f8c01ce2de5a65853e6a5574e32ab15c0', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1529673252.774, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"9613a61f8c01ce2de5a65853e6a5574e32ab15c0\", \"ctime\": 1529673252.774, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 51235195, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": 
true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673252.774, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673253.23, u'block_size': 4096, u'inode': 56054668, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1529673253.23, u'isdir': False, u'ctime': 1529673253.23, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'11de432a77f2de2b2705ea5780f568345ba62116', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': 
{u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1529673253.23, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"11de432a77f2de2b2705ea5780f568345ba62116\", \"ctime\": 1529673253.23, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 56054668, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673253.23, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": 
true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673253.677, u'block_size': 4096, u'inode': 58720433, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1529673253.677, u'isdir': False, u'ctime': 1529673253.677, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'fa627b4b6c0e4d6b86f16984405cd43c6dd3021c', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': 
None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1529673253.677, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"fa627b4b6c0e4d6b86f16984405cd43c6dd3021c\", \"ctime\": 1529673253.677, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 58720433, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673253.677, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', 
{'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673255.881, u'block_size': 4096, u'inode': 29440358, u'isgid': False, u'size': 67, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1529673290.805, u'isdir': False, u'ctime': 1529673255.881, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'f1eb3e81a4f49f68787b67580eb8b9601f3e1e36', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, 
\"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1529673290.805, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"ctime\": 1529673255.881, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 29440358, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673255.881, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.265) 0:02:11.840 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask 
path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.039) 0:02:11.880 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.038) 0:02:11.918 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.044) 0:02:11.962 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.050) 0:02:12.013 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.043) 0:02:12.056 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.042) 0:02:12.099 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.042) 0:02:12.142 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.041) 0:02:12.183 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.040) 0:02:12.223 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.055) 0:02:12.279 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.171) 0:02:12.451 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.038) 0:02:12.489 *********** \nskipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.039) 0:02:12.529 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.038) 0:02:12.567 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.039) 0:02:12.606 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.037) 0:02:12.644 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.051) 0:02:12.696 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nFriday 22 June 2018 09:15:18 -0400 (0:00:00.041) 0:02:12.738 
*********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.041) 0:02:12.779 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.041) 0:02:12.820 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.040) 0:02:12.860 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.039) 0:02:12.899 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.048) 0:02:12.948 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.043) 0:02:12.991 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.038) 0:02:13.030 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.038) 0:02:13.069 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.038) 0:02:13.107 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.038) 0:02:13.145 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nFriday 22 June 2018 09:15:19 -0400 (0:00:00.043) 0:02:13.188 *********** \nok: [ceph-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", 
\"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.249972\", \"end\": \"2018-06-22 13:15:36.140819\", \"rc\": 0, \"start\": \"2018-06-22 13:15:19.890847\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nFriday 22 June 2018 09:15:36 -0400 (0:00:16.716) 0:02:29.905 *********** \nchanged: [ceph-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.024248\", \"end\": \"2018-06-22 
13:15:36.638861\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:15:36.614613\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n 
\\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": 
\\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e 
IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/c3baf43ba63707bde52d6ad9875b8992dcd03576bd8e11611ec48eabc599b419/diff:/var/lib/docker/overlay2/0589eead877a238570964f90f9ccd2a9e5b5e3bfb54b187631f8d5930e5c180d/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n 
\\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" 
\\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" 
\\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" 
\\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/c3baf43ba63707bde52d6ad9875b8992dcd03576bd8e11611ec48eabc599b419/diff:/var/lib/docker/overlay2/0589eead877a238570964f90f9ccd2a9e5b5e3bfb54b187631f8d5930e5c180d/diff\\\",\", \" \\\"MergedDir\\\": 
\\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nFriday 22 June 2018 09:15:36 -0400 (0:00:00.501) 0:02:30.407 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nFriday 22 June 2018 09:15:36 -0400 (0:00:00.076) 0:02:30.483 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nFriday 22 June 2018 09:15:36 -0400 (0:00:00.042) 0:02:30.526 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nFriday 22 June 2018 09:15:36 -0400 (0:00:00.050) 0:02:30.576 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nFriday 22 June 2018 09:15:36 -0400 (0:00:00.043) 0:02:30.620 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nFriday 22 June 2018 09:15:36 -0400 (0:00:00.042) 0:02:30.663 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nFriday 22 June 2018 09:15:36 -0400 (0:00:00.043) 0:02:30.706 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nFriday 22 June 2018 09:15:36 -0400 (0:00:00.042) 0:02:30.748 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nFriday 22 June 2018 09:15:37 -0400 (0:00:00.045) 0:02:30.793 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image 
file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nFriday 22 June 2018 09:15:37 -0400 (0:00:00.046) 0:02:30.840 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nFriday 22 June 2018 09:15:37 -0400 (0:00:00.044) 0:02:30.884 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nFriday 22 June 2018 09:15:37 -0400 (0:00:00.044) 0:02:30.928 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nFriday 22 June 2018 09:15:37 -0400 (0:00:00.042) 0:02:30.971 *********** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.580409\", \"end\": \"2018-06-22 13:15:38.322819\", \"rc\": 0, \"start\": \"2018-06-22 13:15:37.742410\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nFriday 22 June 2018 09:15:38 -0400 (0:00:01.119) 0:02:32.091 *********** 
\nok: [ceph-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nFriday 22 June 2018 09:15:38 -0400 (0:00:00.073) 0:02:32.164 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nFriday 22 June 2018 09:15:38 -0400 (0:00:00.046) 0:02:32.210 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nFriday 22 June 2018 09:15:38 -0400 (0:00:00.047) 0:02:32.258 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nFriday 22 June 2018 09:15:38 -0400 (0:00:00.077) 0:02:32.336 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nFriday 22 June 2018 09:15:38 -0400 (0:00:00.044) 0:02:32.381 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nFriday 22 June 2018 09:15:38 -0400 (0:00:00.048) 
0:02:32.429 *********** \nchanged: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nFriday 22 June 2018 09:15:40 -0400 (0:00:02.211) 0:02:34.641 *********** \nskipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nFriday 22 June 2018 09:15:40 -0400 (0:00:00.043) 0:02:34.684 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nFriday 22 June 2018 09:15:40 -0400 (0:00:00.044) 0:02:34.728 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nFriday 22 June 2018 09:15:41 -0400 (0:00:00.054) 0:02:34.782 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nFriday 22 June 2018 09:15:41 -0400 (0:00:00.044) 0:02:34.827 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nFriday 22 June 2018 09:15:41 -0400 (0:00:00.038) 0:02:34.866 *********** \nchanged: [ceph-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nFriday 22 June 2018 09:15:41 -0400 (0:00:00.476) 0:02:35.342 *********** \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container 
for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0\nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"d45396dce38fd1819887516b5af41173fc14e408\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"5268a9201371c7a177ada3f251f5af2d\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 871, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673341.62-105295216168497/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nFriday 22 June 2018 09:15:44 -0400 (0:00:03.084) 0:02:38.427 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure public_network configured] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2\nFriday 22 June 2018 09:15:44 -0400 (0:00:00.043) 0:02:38.471 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure cluster_network configured] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8\nFriday 22 June 2018 09:15:44 -0400 (0:00:00.039) 0:02:38.510 *********** \nskipping: [ceph-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure journal_size configured] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15\nFriday 22 June 2018 09:15:44 -0400 (0:00:00.041) 0:02:38.552 *********** \nok: [ceph-0] => {\n \"msg\": \"WARNING: journal_size is configured to 512, which is less than 5GB. This is not recommended and can lead to severe issues.\"\n}\n\nTASK [ceph-osd : make sure an osd scenario was chosen] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23\nFriday 22 June 2018 09:15:44 -0400 (0:00:00.072) 0:02:38.625 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure a valid osd scenario was chosen] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31\nFriday 22 June 2018 09:15:44 -0400 (0:00:00.044) 0:02:38.669 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : verify devices have been provided] ****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39\nFriday 22 June 2018 09:15:44 -0400 (0:00:00.044) 0:02:38.714 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49\nFriday 22 June 2018 09:15:44 -0400 (0:00:00.050) 0:02:38.764 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : verify lvm_volumes have been provided] ************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.044) 0:02:38.809 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.044) 0:02:38.853 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure the devices variable is a list] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.048) 0:02:38.901 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : verify dedicated devices have been provided] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.047) 0:02:38.949 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.042) 0:02:38.991 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.042) 0:02:39.034 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include system_tuning.yml] 
************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.049) 0:02:39.084 *********** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0\n\nTASK [ceph-osd : disable osd directory parsing by updatedb] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.068) 0:02:39.152 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : disable osd directory path in updatedb.conf] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.038) 0:02:39.191 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : create tmpfiles.d directory] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.039) 0:02:39.231 *********** \nok: [ceph-0] => {\"changed\": false, \"gid\": 0, \"group\": \"root\", \"mode\": \"0755\", \"owner\": \"root\", \"path\": \"/etc/tmpfiles.d\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 0}\n\nTASK [ceph-osd : disable transparent hugepage] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33\nFriday 22 June 2018 09:15:45 -0400 (0:00:00.475) 0:02:39.706 *********** \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"e000059a4cfd8ce350b13f14305a46eaf99849ba\", \"dest\": \"/etc/tmpfiles.d/ceph_transparent_hugepage.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"21ac872f3aa1fb44b01d4f7ab00a35fc\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 158, 
\"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673345.97-243307488122427/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : get default vm.min_free_kbytes] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45\nFriday 22 June 2018 09:15:48 -0400 (0:00:02.376) 0:02:42.083 *********** \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"sysctl\", \"-b\", \"vm.min_free_kbytes\"], \"delta\": \"0:00:00.003596\", \"end\": \"2018-06-22 13:15:48.800700\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:15:48.797104\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"67584\", \"stdout_lines\": [\"67584\"]}\n\nTASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52\nFriday 22 June 2018 09:15:48 -0400 (0:00:00.470) 0:02:42.554 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"vm_min_free_kbytes\": \"67584\"}, \"changed\": false}\n\nTASK [ceph-osd : apply operating system tuning] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56\nFriday 22 June 2018 09:15:48 -0400 (0:00:00.062) 0:02:42.616 *********** \nchanged: [ceph-0] => (item={u'enable': u\"(osd_objectstore == 'bluestore')\", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {\"changed\": true, \"item\": {\"enable\": \"(osd_objectstore == 'bluestore')\", \"name\": \"fs.aio-max-nr\", \"value\": \"1048576\"}}\nchanged: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {\"changed\": true, \"item\": {\"name\": \"fs.file-max\", \"value\": 26234859}}\nchanged: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {\"changed\": true, \"item\": {\"name\": \"vm.zone_reclaim_mode\", \"value\": 0}}\nchanged: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {\"changed\": true, \"item\": {\"name\": \"vm.swappiness\", 
\"value\": 10}}\nchanged: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {\"changed\": true, \"item\": {\"name\": \"vm.min_free_kbytes\", \"value\": \"67584\"}}\n\nTASK [ceph-osd : install dependencies] *****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10\nFriday 22 June 2018 09:15:51 -0400 (0:00:02.420) 0:02:45.037 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include common.yml] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18\nFriday 22 June 2018 09:15:51 -0400 (0:00:00.038) 0:02:45.075 *********** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0\n\nTASK [ceph-osd : create bootstrap-osd and osd directories] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2\nFriday 22 June 2018 09:15:51 -0400 (0:00:00.063) 0:02:45.139 *********** \nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nok: [ceph-0] => (item=/var/lib/ceph/osd/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-osd : copy ceph key(s) if needed] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15\nFriday 22 June 2018 09:15:52 -0400 (0:00:00.886) 0:02:46.026 *********** \nchanged: [ceph-0] => (item={u'name': 
u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"d8a7f9eb9d9dc0395da75fc7759797ea97e335aa\", \"dest\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"name\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\"}, \"md5sum\": \"5208039d17edb4ccda0d9023c061854b\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 113, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673352.3-30870827595785/source\", \"state\": \"file\", \"uid\": 167}\nskipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2\nFriday 22 June 2018 09:15:54 -0400 (0:00:02.283) 0:02:48.309 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11\nFriday 22 June 2018 09:15:54 -0400 (0:00:00.038) 0:02:48.348 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20\nFriday 22 June 2018 09:15:54 -0400 (0:00:00.049) 0:02:48.397 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore --dmcrypt'] 
***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29\nFriday 22 June 2018 09:15:54 -0400 (0:00:00.048) 0:02:48.446 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38\nFriday 22 June 2018 09:15:54 -0400 (0:00:00.044) 0:02:48.491 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47\nFriday 22 June 2018 09:15:54 -0400 (0:00:00.045) 0:02:48.537 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56\nFriday 22 June 2018 09:15:54 -0400 (0:00:00.042) 0:02:48.579 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62\nFriday 22 June 2018 09:15:54 -0400 (0:00:00.039) 0:02:48.619 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"docker_env_args\": \"-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0\"}, \"changed\": false}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70\nFriday 22 June 2018 09:15:54 -0400 (0:00:00.069) 
0:02:48.688 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78\nFriday 22 June 2018 09:15:54 -0400 (0:00:00.044) 0:02:48.732 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86\nFriday 22 June 2018 09:15:55 -0400 (0:00:00.048) 0:02:48.781 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2\nFriday 22 June 2018 09:15:55 -0400 (0:00:00.041) 0:02:48.822 *********** \nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sectors': u'41943040', u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. 
Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-06-20-11-57-19-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-06-20-11-57-19-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'fca00eb7-6dba-4ea0-b1e5-202b819f2b85', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'fca00eb7-6dba-4ea0-b1e5-202b819f2b85']}, u'sectors': u'41938911', u'start': u'4096', u'holders': [], u'size': u'20.00 GB'}}, u'holders': [], u'size': u'20.00 GB'}, 'key': u'vda'}) => {\"changed\": false, \"item\": {\"key\": \"vda\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {\"vda1\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"config-2\"], \"masters\": [], \"uuids\": [\"2018-06-20-11-57-19-00\"]}, \"sectors\": \"2048\", \"sectorsize\": 512, \"size\": \"1.00 MB\", \"start\": \"2048\", \"uuid\": \"2018-06-20-11-57-19-00\"}, \"vda2\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"img-rootfs\"], \"masters\": [], \"uuids\": [\"fca00eb7-6dba-4ea0-b1e5-202b819f2b85\"]}, \"sectors\": \"41938911\", \"sectorsize\": 512, \"size\": \"20.00 GB\", \"start\": \"4096\", \"uuid\": \"fca00eb7-6dba-4ea0-b1e5-202b819f2b85\"}}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"41943040\", \"sectorsize\": \"512\", \"size\": \"20.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', 
u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sectors': u'83886080', u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'40.00 GB'}, 'key': u'vdb'}) => {\"changed\": false, \"item\": {\"key\": \"vdb\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"83886080\", \"sectorsize\": \"512\", \"size\": \"40.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : resolve dedicated device link(s)] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15\nFriday 22 June 2018 09:15:55 -0400 (0:00:00.059) 0:02:48.881 *********** \n\nTASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24\nFriday 22 June 2018 09:15:55 -0400 (0:00:00.046) 0:02:48.927 *********** \n\nTASK [ceph-osd : set_fact build final dedicated_devices list] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32\nFriday 22 June 2018 09:15:55 -0400 (0:00:00.045) 0:02:48.973 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : read information about the devices] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29\nFriday 22 June 2018 09:15:55 -0400 (0:00:00.039) 
0:02:49.013 *********** \nok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}\n\nTASK [ceph-osd : check the partition status of the osd disks] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2\nFriday 22 June 2018 09:15:55 -0400 (0:00:00.722) 0:02:49.736 *********** \nok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:01.009297\", \"end\": \"2018-06-22 13:15:57.578932\", \"failed_when_result\": false, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:15:56.569635\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : create gpt disk label] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11\nFriday 22 June 2018 09:15:57 -0400 (0:00:01.602) 0:02:51.338 *********** \nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-06-22 13:15:57.578932', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'start': u'2018-06-22 13:15:56.569635', u'delta': u'0:00:01.009297', 'item': u'/dev/vdb', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u'', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdb']) => {\"changed\": false, \"cmd\": [\"parted\", 
\"-s\", \"/dev/vdb\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.012183\", \"end\": \"2018-06-22 13:15:58.183577\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:01.009297\", \"end\": \"2018-06-22 13:15:57.578932\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:15:56.569635\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-06-22 13:15:58.171394\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : include scenarios/collocated.yml] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41\nFriday 22 June 2018 09:15:58 -0400 (0:00:00.607) 0:02:51.946 *********** \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0\n\nTASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5\nFriday 22 June 2018 09:15:58 -0400 (0:00:00.083) 0:02:52.030 *********** \nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u\"unit 'MiB' print\", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': 
None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 40960.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-6\", \"delta\": \"0:00:06.963994\", \"end\": \"2018-06-22 13:16:05.816535\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-06-22 13:15:58.852541\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: 
create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp 
mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-06-22 13:15:59'\\n+common_functions.sh:13: log(): echo '2018-06-22 13:15:59 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid e97f941b-4aee-4d8d-9905-035cecb14b1e /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:e97f941b-4aee-4d8d-9905-035cecb14b1e --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:6f1cf919-f6ce-4f28-9ff2-a2010186b52e --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.tj5UdE with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.tj5UdE\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.tj5UdE\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.tj5UdE\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/ceph_fsid.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/ceph_fsid.30599.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/fsid.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/fsid.30599.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/magic.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/magic.30599.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/journal_uuid.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/journal_uuid.30599.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.tj5UdE/journal -> /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/type.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/type.30599.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.tj5UdE\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.tj5UdE\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-06-22 13:15:59'\", \"+common_functions.sh:13: log(): echo '2018-06-22 13:15:59 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid e97f941b-4aee-4d8d-9905-035cecb14b1e /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:e97f941b-4aee-4d8d-9905-035cecb14b1e --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:6f1cf919-f6ce-4f28-9ff2-a2010186b52e --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.tj5UdE with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.tj5UdE\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.tj5UdE\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.tj5UdE\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/ceph_fsid.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/ceph_fsid.30599.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/fsid.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/fsid.30599.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/magic.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/magic.30599.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/journal_uuid.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/journal_uuid.30599.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.tj5UdE/journal -> /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/type.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/type.30599.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.tj5UdE\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.tj5UdE\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-06-22 13:15:59 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-06-22 13:15:59 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-06-22 13:15:59 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-06-22 13:15:59 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.lBMnxJz07c' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from 
root:root to ceph:ceph\\n2018-06-22 13:15:59 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=2588607 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=10354427, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=5055, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-06-22 13:15:59 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-06-22 13:15:59 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-06-22 13:15:59 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-06-22 13:15:59 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of 
'/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.lBMnxJz07c' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-06-22 13:15:59 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=2588607 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=10354427, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=5055, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}\n\nTASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30\nFriday 22 June 2018 09:16:05 -0400 
(0:00:07.548) 0:02:59.578 *********** \nskipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53\nFriday 22 June 2018 09:16:05 -0400 (0:00:00.046) 0:02:59.625 *********** \nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u\"unit 'MiB' print\", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 40960.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include 
scenarios/non-collocated.yml] *************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48
Friday 22 June 2018 09:16:05 -0400 (0:00:00.053) 0:02:59.679 *********** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : include scenarios/lvm.yml] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56
Friday 22 June 2018 09:16:05 -0400 (0:00:00.042) 0:02:59.721 *********** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : include activate_osds.yml] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64
Friday 22 June 2018 09:16:05 -0400 (0:00:00.037) 0:02:59.759 *********** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : include start_osds.yml] ***************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72
Friday 22 June 2018 09:16:06 -0400 (0:00:00.043) 0:02:59.802 *********** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : include docker/main.yml] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80
Friday 22 June 2018 09:16:06 -0400 (0:00:00.040) 0:02:59.843 *********** 
included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0

TASK [ceph-osd : include start_docker_osd.yml] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2
Friday 22 June 2018 09:16:06 -0400 (0:00:00.080) 0:02:59.924 *********** 
included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0

TASK [ceph-osd : umount ceph disk (if on openstack)] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4
Friday 22 June 2018 09:16:06 -0400 (0:00:00.063) 0:02:59.987 *********** 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : test if the container image has the disk_list function] *******
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13
Friday 22 June 2018 09:16:06 -0400 (0:00:00.038) 0:03:00.025 *********** 
ok: [ceph-0] => {"changed": false, "cmd": ["docker", "run", "--rm", "--entrypoint=stat", "192.168.24.1:8787/rhceph:3-6", "disk_list.sh"], "delta": "0:00:00.429719", "end": "2018-06-22 13:16:07.199214", "failed_when_result": false, "rc": 0, "start": "2018-06-22 13:16:06.769495", "stderr": "", "stderr_lines": [], "stdout": " File: 'disk_list.sh'\n Size: 3726 \tBlocks: 8 IO Block: 4096 regular file\nDevice: 2ah/42d\tInode: 46189889 Links: 1\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\nAccess: 2018-04-18 13:02:03.000000000 +0000\nModify: 2018-04-18 13:02:03.000000000 +0000\nChange: 2018-06-22 13:15:25.135445874 +0000\n Birth: -", "stdout_lines": [" File: 'disk_list.sh'", " Size: 3726 \tBlocks: 8 IO Block: 4096 regular file", "Device: 2ah/42d\tInode: 46189889 Links: 1", "Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)", "Access: 2018-04-18 13:02:03.000000000 +0000", "Modify: 2018-04-18 13:02:03.000000000 +0000", "Change: 2018-06-22 13:15:25.135445874 +0000", " Birth: -"]}

TASK [ceph-osd : generate ceph osd docker run script] **************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19
Friday 22 June 2018 09:16:07 -0400 (0:00:00.934) 0:03:00.960 *********** 
changed: [ceph-0] => {"changed": true, "checksum": "6e2ae7f97fe861dbe9824133e6c912df4b7c8959", "dest": "/usr/share/ceph-osd-run.sh", "gid": 0, "group": "root", "md5sum": "97ef03a63aca5a84f85a7a061ad42a61", "mode": "0744", "owner": "root", "secontext": "system_u:object_r:usr_t:s0", "size": 1000, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673367.23-199416710417990/source", "state": "file", "uid": 0}

TASK [ceph-osd : generate systemd unit file] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:28
Friday 22 June 2018 09:16:09 -0400 (0:00:02.412) 0:03:03.372 *********** 
changed: [ceph-0] => {"changed": true, "checksum": "b7abfb86a4af8d6e54d349965cae96bf9b995c49", "dest": "/etc/systemd/system/ceph-osd@.service", "gid": 0, "group": "root", "md5sum": "8a53f95e6590750e7c4807589dd5864c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:systemd_unit_file_t:s0", "size": 496, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673369.64-214659588146178/source", "state": "file", "uid": 0}

TASK [ceph-osd : systemd start osd container] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:39
Friday 22 June 2018 09:16:12 -0400 (0:00:02.624) 0:03:05.997 *********** 
ok: [ceph-0] => (item=/dev/vdb) => {"changed": false, "enabled": true, "item": "/dev/vdb", "name": "ceph-osd@vdb", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "docker.service basic.target systemd-journald.socket system-ceph\\x5cx2dosd.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Ceph OSD", "DevicePolicy": "auto", "EnvironmentFile": "/etc/environment (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/ceph-osd@.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "ceph-osd@vdb.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "14904", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "14904", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "ceph-osd@vdb.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "always", "RestartUSec": "10s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system-ceph\\x5cx2dosd.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "2min", "TimeoutStopUSec": "15s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "simple", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system-ceph\\x5cx2dosd.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}

TASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87
Friday 22 June 2018 09:16:12 -0400 (0:00:00.728) 0:03:06.725 *********** 
ok: [ceph-0] => (item={u'mon_cap': u'allow r', u'name': u'client.openstack', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'}) => {"ansible_facts": {"openstack_keys_tmp": [{"caps": {"mds": "", "mgr": "allow *", "mon": "allow r", "osd": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, "key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mode": "0600", "name": "client.openstack"}]}, "changed": false, "item": {"key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r", "name": "client.openstack", "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}}
ok: [ceph-0] => (item={u'mon_cap': u'allow r, allow command \\"auth del\\", allow command \\"auth caps\\", allow command \\"auth get\\", allow command \\"auth get-or-create\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', u'osd_cap': u'allow rw'}) => {"ansible_facts": {"openstack_keys_tmp": [{"caps": {"mds": "", "mgr": "allow *", "mon": "allow r", "osd": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, "key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "osd": "allow rw"}, "key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mode": "0600", "name": "client.manila"}]}, "changed": false, "item": {"key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mds_cap": "allow *", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "name": "client.manila", "osd_cap": "allow rw"}}
ok: [ceph-0] => (item={u'mon_cap': u'allow rw', u'name': u'client.radosgw', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', u'osd_cap': u'allow rwx'}) => {"ansible_facts": {"openstack_keys_tmp": [{"caps": {"mds": "", "mgr": "allow *", "mon": "allow r", "osd": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, "key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "osd": "allow rw"}, "key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mode": "0600", "name": "client.manila"}, {"caps": {"mds": "", "mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==", "mode": "0600", "name": "client.radosgw"}]}, "changed": false, "item": {"key": "AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==", "mgr_cap": "allow *", "mode": "0600", "mon_cap": "allow rw", "name": "client.radosgw", "osd_cap": "allow rwx"}}

TASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95
Friday 22 June 2018 09:16:13 -0400 (0:00:00.108) 0:03:06.834 *********** 
ok: [ceph-0] => {"ansible_facts": {"openstack_keys": [{"caps": {"mds": "", "mgr": "allow *", "mon": "allow r", "osd": "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics"}, "key": "AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command \\\"auth del\\\", allow command \\\"auth caps\\\", allow command \\\"auth get\\\", allow command \\\"auth get-or-create\\\"", "osd": "allow rw"}, "key": "AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==", "mode": "0600", "name": "client.manila"}, {"caps": {"mds": "", "mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==", "mode": "0600", "name": "client.radosgw"}]}, "changed": false}

TASK [ceph-osd : wait for all osd to be up] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2
Friday 22 June 2018 09:16:13 -0400 (0:00:00.099) 0:03:06.933 *********** 
changed: [ceph-0 -> 192.168.24.8] => {"attempts": 1, "changed": true, "cmd": "test \"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\"osdmap\"][\"osdmap\"][\"num_osds\"])')\" = \"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\"osdmap\"][\"osdmap\"][\"num_up_osds\"])')\"", "delta": "0:00:00.761118", "end": "2018-06-22 13:16:14.540851", "rc": 0, "start": "2018-06-22 13:16:13.779733", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-osd : list existing pool(s)] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12
Friday 22 June 2018 09:16:14 -0400 (0:00:01.411) 0:03:08.345 *********** 
changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "images", "size"], "delta": "0:00:00.386280", "end": "2018-06-22 13:16:15.541877", "failed_when_result": false, "item": {"application": "rbd", "name": "images", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:15.155597", "stderr": "Error ENOENT: unrecognized pool 'images'", "stderr_lines": ["Error ENOENT: unrecognized pool 'images'"], "stdout": "", "stdout_lines": []}
changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "metrics", "size"], "delta": "0:00:00.371764", "end": "2018-06-22 13:16:16.417987", "failed_when_result": false, "item": {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:16.046223", "stderr": "Error ENOENT: unrecognized pool 'metrics'", "stderr_lines": ["Error ENOENT: unrecognized pool 'metrics'"], "stdout": "", "stdout_lines": []}
changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "backups", "size"], "delta": "0:00:00.351274", "end": "2018-06-22 13:16:17.240806", "failed_when_result": false, "item": {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:16.889532", "stderr": "Error ENOENT: unrecognized pool 'backups'", "stderr_lines": ["Error ENOENT: unrecognized pool 'backups'"], "stdout": "", "stdout_lines": []}
changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "vms", "size"], "delta": "0:00:00.326659", "end": "2018-06-22 13:16:18.040070", "failed_when_result": false, "item": {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:17.713411", "stderr": "Error ENOENT: unrecognized pool 'vms'", "stderr_lines": ["Error ENOENT: unrecognized pool 'vms'"], "stdout": "", "stdout_lines": []}
changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "volumes", "size"], "delta": "0:00:00.324626", "end": "2018-06-22 13:16:18.851610", "failed_when_result": false, "item": {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:18.526984", "stderr": "Error ENOENT: unrecognized pool 'volumes'", "stderr_lines": ["Error ENOENT: unrecognized pool 'volumes'"], "stdout": "", "stdout_lines": []}

TASK [ceph-osd : create openstack pool(s)] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21
Friday 22 June 2018 09:16:18 -0400 (0:00:04.310) 0:03:12.655 *********** 
ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'images'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'images', u'size'], u'end': u'2018-06-22 13:16:15.541877', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:15.155597', u'delta': u'0:00:00.386280', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u"Error ENOENT: unrecognized pool 'images'", '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "images", "32", "32", "replicated_rule", "1"], "delta": "0:00:00.933696", "end": "2018-06-22 13:16:20.396904", "item": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": ""}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.8"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "images", "size"], "delta": "0:00:00.386280", "end": "2018-06-22 13:16:15.541877", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "rbd", "name": "images", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:15.155597", "stderr": "Error ENOENT: unrecognized pool 'images'", "stderr_lines": ["Error ENOENT: unrecognized pool 'images'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-06-22 13:16:19.463208", "stderr": "pool 'images' created", "stderr_lines": ["pool 'images' created"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'metrics'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'metrics', u'size'], u'end': u'2018-06-22 13:16:16.417987', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:16.046223', u'delta': u'0:00:00.371764', 'item': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u"Error ENOENT: unrecognized pool 'metrics'", '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "metrics", "32", "32", "replicated_rule", "1"], "delta": "0:00:00.893886", "end": "2018-06-22 13:16:21.887666", "item": [{"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": ""}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.8"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "metrics", "size"], "delta": "0:00:00.371764", "end": "2018-06-22 13:16:16.417987", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:16.046223", "stderr": "Error ENOENT: unrecognized pool 'metrics'", "stderr_lines": ["Error ENOENT: unrecognized pool 'metrics'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-06-22 13:16:20.993780", "stderr": "pool 'metrics' created", "stderr_lines": ["pool 'metrics' created"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'backups'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'backups', u'size'], u'end': u'2018-06-22 13:16:17.240806', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:16.889532', u'delta': u'0:00:00.351274', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u"Error ENOENT: unrecognized pool 'backups'", '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "backups", "32", "32", "replicated_rule", "1"], "delta": "0:00:00.891178", "end": "2018-06-22 13:16:23.269395", "item": [{"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": ""}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.8"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "backups", "size"], "delta": "0:00:00.351274", "end": "2018-06-22 13:16:17.240806", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:16.889532", "stderr": "Error ENOENT: unrecognized pool 'backups'", "stderr_lines": ["Error ENOENT: unrecognized pool 'backups'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-06-22 13:16:22.378217", "stderr": "pool 'backups' created", "stderr_lines": ["pool 'backups' created"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'vms'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'vms', u'size'], u'end': u'2018-06-22 13:16:18.040070', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:17.713411', u'delta': u'0:00:00.326659', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u"Error ENOENT: unrecognized pool 'vms'", '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "vms", "32", "32", "replicated_rule", "1"], "delta": "0:00:00.912876", "end": "2018-06-22 13:16:24.668246", "item": [{"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": ""}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.8"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "vms", "size"], "delta": "0:00:00.326659", "end": "2018-06-22 13:16:18.040070", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:17.713411", "stderr": "Error ENOENT: unrecognized pool 'vms'", "stderr_lines": ["Error ENOENT: unrecognized pool 'vms'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-06-22 13:16:23.755370", "stderr": "pool 'vms' created", "stderr_lines": ["pool 'vms' created"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u"Error ENOENT: unrecognized pool 'volumes'"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'volumes', u'size'], u'end': u'2018-06-22 13:16:18.851610', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:18.526984', u'delta': u'0:00:00.324626', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u"Error ENOENT: unrecognized pool 'volumes'", '_ansible_ignore_errors': None, u'failed': False}]) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "create", "volumes", "32", "32", "replicated_rule", "1"], "delta": "0:00:01.051271", "end": "2018-06-22 13:16:26.212069", "item": [{"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": ""}, {"_ansible_delegated_vars": {"ansible_delegated_host": "controller-0", "ansible_host": "192.168.24.8"}, "_ansible_ignore_errors": null, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "get", "volumes", "size"], "delta": "0:00:00.324626", "end": "2018-06-22 13:16:18.851610", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": ""}, "msg": "non-zero return code", "rc": 2, "start": "2018-06-22 13:16:18.526984", "stderr": "Error ENOENT: unrecognized pool 'volumes'", "stderr_lines": ["Error ENOENT: unrecognized pool 'volumes'"], "stdout": "", "stdout_lines": []}], "rc": 0, "start": "2018-06-22 13:16:25.160798", "stderr": "pool 'volumes' created", "stderr_lines": ["pool 'volumes' created"], "stdout": "", "stdout_lines": []}

TASK [ceph-osd : assign application to pool(s)] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:41
Friday 22 June 2018 09:16:26 -0400 (0:00:07.355) 0:03:20.011 *********** 
ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "images", "rbd"], "delta": "0:00:01.321638", "end": "2018-06-22 13:16:28.239970", "item": {"application": "rbd", "name": "images", "pg_num": 32, "rule_name": ""}, "rc": 0, "start": "2018-06-22 13:16:26.918332", "stderr": "enabled application 'rbd' on pool 'images'", "stderr_lines": ["enabled application 'rbd' on pool 'images'"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "metrics", "openstack_gnocchi"], "delta": "0:00:00.500731", "end": "2018-06-22 13:16:29.211350", "item": {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": ""}, "rc": 0, "start": "2018-06-22 13:16:28.710619", "stderr": "enabled application 'openstack_gnocchi' on pool 'metrics'", "stderr_lines": ["enabled application 'openstack_gnocchi' on pool 'metrics'"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "backups", "rbd"], "delta": "0:00:00.528652", "end": "2018-06-22 13:16:30.205816", "item": {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": ""}, "rc": 0, "start": "2018-06-22 13:16:29.677164", "stderr": "enabled application 'rbd' on pool 'backups'", "stderr_lines": ["enabled application 'rbd' on pool 'backups'"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "vms", "rbd"], "delta": "0:00:00.541306", "end": "2018-06-22 13:16:31.225138", "item": {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": ""}, "rc": 0, "start": "2018-06-22 13:16:30.683832", "stderr": "enabled application 'rbd' on pool 'vms'", "stderr_lines": ["enabled application 'rbd' on pool 'vms'"], "stdout": "", "stdout_lines": []}
ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}) => {"changed": false, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "osd", "pool", "application", "enable", "volumes", "rbd"], "delta": "0:00:00.540333", "end": "2018-06-22 13:16:32.252575", "item": {"application": "rbd", "name": "volumes", "pg_num": 32, "rule_name": ""}, "rc": 0, "start": "2018-06-22 13:16:31.712242", "stderr": "enabled application 'rbd' on pool 'volumes'", "stderr_lines": ["enabled application 'rbd' on pool 'volumes'"], "stdout": "", "stdout_lines": []}

TASK [ceph-osd : create openstack cephx key(s)] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:50
Friday 22 June 2018 09:16:32 -0400 (0:00:06.038) 0:03:26.049 *********** 
changed: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx 
pool=metrics', 'mon': u'allow r', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.835956\", \"end\": \"2018-06-22 13:16:33.888266\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:33.052310\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', 'mgr': u'allow *'}, 'name': u'client.manila', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'mode': u'0600'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.manila.keyring\"], \"delta\": \"0:00:00.773456\", \"end\": \"2018-06-22 13:16:35.134056\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, 
\"start\": \"2018-06-22 13:16:34.360600\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'', 'osd': u'allow rwx', 'mon': u'allow rw', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.759743\", \"end\": \"2018-06-22 13:16:36.365983\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:35.606240\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : fetch openstack cephx key(s)] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:63\nFriday 22 June 2018 09:16:36 -0400 (0:00:04.104) 0:03:30.154 *********** \nchanged: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mon': u'allow r', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'name': u'client.openstack'}) => {\"changed\": true, \"checksum\": \"e8b2bdc53999aaa7ddcfb199e3722cc6d2ddde91\", \"dest\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.openstack.keyring\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow 
class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"md5sum\": \"566356fccefb4488e70a2e9e03c00c1e\", \"remote_checksum\": \"e8b2bdc53999aaa7ddcfb199e3722cc6d2ddde91\", \"remote_md5sum\": null}\nchanged: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', 'mgr': u'allow *'}, 'name': u'client.manila', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'mode': u'0600'}) => {\"changed\": true, \"checksum\": \"f4862790452df4e779b0fe4b180c86014cb1da5d\", \"dest\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.manila.keyring\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"md5sum\": \"6cdea25af14b36920e2bf08f8511bc2a\", \"remote_checksum\": \"f4862790452df4e779b0fe4b180c86014cb1da5d\", \"remote_md5sum\": null}\nchanged: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'', 'osd': u'allow rwx', 'mon': u'allow rw', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'name': u'client.radosgw'}) => {\"changed\": true, \"checksum\": \"cd5b07c38b4be9fb966b57a01d3c261899cb78ca\", \"dest\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.radosgw.keyring\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"md5sum\": \"25d9851a517ff9a4c090a62ec2a3cc5c\", \"remote_checksum\": \"cd5b07c38b4be9fb966b57a01d3c261899cb78ca\", \"remote_md5sum\": null}\n\nTASK [ceph-osd : copy to other mons the openstack cephx key(s)] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:71\nFriday 22 June 2018 09:16:37 -0400 (0:00:01.490) 0:03:31.644 *********** \nchanged: [ceph-0 -> 192.168.24.8] => (item=[u'controller-0', {'name': u'client.openstack', 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mgr': u'allow *', 'mon': u'allow r'}}]) => {\"changed\": true, \"checksum\": \"e8b2bdc53999aaa7ddcfb199e3722cc6d2ddde91\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.openstack.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 299, \"state\": \"file\", \"uid\": 167}\nchanged: [ceph-0 -> 192.168.24.8] => (item=[u'controller-0', {'mode': u'0600', 'name': u'client.manila', 
'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mgr': u'allow *', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"'}}]) => {\"changed\": true, \"checksum\": \"f4862790452df4e779b0fe4b180c86014cb1da5d\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.manila.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 276, \"state\": \"file\", \"uid\": 167}\nchanged: [ceph-0 -> 192.168.24.8] => (item=[u'controller-0', {'name': u'client.radosgw', 'mode': u'0600', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'caps': {'mds': u'', 'osd': u'allow rwx', 'mgr': u'allow *', 'mon': u'allow rw'}}]) => {\"changed\": true, \"checksum\": \"cd5b07c38b4be9fb966b57a01d3c261899cb78ca\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 149, \"state\": \"file\", \"uid\": 167}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nFriday 22 June 
2018 09:16:43 -0400 (0:00:05.407) 0:03:37.052 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nFriday 22 June 2018 09:16:43 -0400 (0:00:00.059) 0:03:37.112 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nFriday 22 June 2018 09:16:43 -0400 (0:00:00.037) 0:03:37.149 *********** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nFriday 22 June 2018 09:16:43 -0400 (0:00:00.070) 0:03:37.220 *********** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nFriday 22 June 2018 09:16:43 -0400 (0:00:00.067) 0:03:37.288 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nFriday 22 June 2018 09:16:43 -0400 (0:00:00.057) 0:03:37.346 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nFriday 22 June 2018 09:16:43 -0400 (0:00:00.058) 0:03:37.404 *********** \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"9a770971b362c519fc75c5228fc22dd8d4cc68aa\", \"dest\": \"/tmp/restart_osd_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"c42d82e9b9c002f16b40c524607c38ea\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", 
\"size\": 3060, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673403.7-133197522237299/source\", \"state\": \"file\", \"uid\": 0}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nFriday 22 June 2018 09:16:45 -0400 (0:00:02.348) 0:03:39.753 *********** \nskipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.070) 0:03:39.824 *********** \nskipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.075) 0:03:39.899 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.064) 0:03:39.964 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.063) 0:03:40.027 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.040) 0:03:40.068 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.046) 0:03:40.114 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.050) 0:03:40.164 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.059) 0:03:40.224 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.059) 0:03:40.283 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.036) 0:03:40.319 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.045) 0:03:40.365 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.048) 0:03:40.414 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.057) 0:03:40.471 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.060) 0:03:40.531 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.040) 0:03:40.572 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.046) 0:03:40.618 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.046) 0:03:40.664 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nFriday 22 June 2018 09:16:46 -0400 (0:00:00.058) 0:03:40.723 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.061) 0:03:40.784 *********** \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.039) 0:03:40.824 *********** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.073) 0:03:40.897 *********** \nskipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] 
********\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.073) 0:03:40.971 *********** \nok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph osd install 'Complete'] *****************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:156\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.081) 0:03:41.052 *********** \nok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"end\": \"20180622091647Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY [mdss] ********************************************************************\nskipping: no hosts matched\n\nPLAY [rgws] ********************************************************************\nskipping: no hosts matched\n\nPLAY [nfss] ********************************************************************\nskipping: no hosts matched\n\nPLAY [rbdmirrors] **************************************************************\nskipping: no hosts matched\n\nPLAY [restapis] ****************************************************************\nskipping: no hosts matched\n\nPLAY [clients] *****************************************************************\n\nTASK [set ceph client install 'In Progress'] ***********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:307\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.144) 0:03:41.196 *********** \nok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"start\": \"20180622091647Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.070) 0:03:41.267 *********** \nskipping: 
[compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.041) 0:03:41.308 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.038) 0:03:41.347 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.048) 0:03:41.395 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.042) 0:03:41.438 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.040) 0:03:41.478 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.041) 0:03:41.519 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.039) 0:03:41.558 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.036) 0:03:41.595 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.043) 0:03:41.638 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.039) 0:03:41.678 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.040) 0:03:41.718 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nFriday 22 June 2018 09:16:47 -0400 (0:00:00.039) 0:03:41.758 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.041) 0:03:41.800 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.042) 0:03:41.842 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.047) 0:03:41.889 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.042) 0:03:41.932 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nFriday 22 
June 2018 09:16:48 -0400 (0:00:00.041) 0:03:41.973 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.040) 0:03:42.013 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.041) 0:03:42.055 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.040) 0:03:42.095 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.045) 0:03:42.141 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.039) 0:03:42.180 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror 
socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.041) 0:03:42.221 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.039) 0:03:42.261 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.039) 0:03:42.300 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.037) 0:03:42.338 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.043) 0:03:42.381 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nFriday 22 June 2018 09:16:48 -0400 (0:00:00.039) 0:03:42.421 *********** \nok: [compute-0] => 
{\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nFriday 22 June 2018 09:16:49 -0400 (0:00:00.601) 0:03:43.023 *********** \nok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nFriday 22 June 2018 09:16:49 -0400 (0:00:00.069) 0:03:43.092 *********** \nok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nFriday 22 June 2018 09:16:49 -0400 (0:00:00.187) 0:03:43.280 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nFriday 22 June 2018 09:16:49 -0400 (0:00:00.067) 0:03:43.347 *********** \nok: [compute-0 -> 192.168.24.8] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] 
********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34\nFriday 22 June 2018 09:16:49 -0400 (0:00:00.234) 0:03:43.581 *********** \nok: [compute-0 -> 192.168.24.8] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.358092\", \"end\": \"2018-06-22 13:16:50.792922\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:16:50.434830\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"53912472-747b-11e8-95a3-5254003d7dcb\", \"stdout_lines\": [\"53912472-747b-11e8-95a3-5254003d7dcb\"]}\n\nTASK [ceph-defaults : check if /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir directory exists] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47\nFriday 22 June 2018 09:16:50 -0400 (0:00:00.977) 0:03:44.559 *********** \nok: [compute-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57\nFriday 22 June 2018 09:16:50 -0400 (0:00:00.196) 0:03:44.755 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : create a local fetch directory if it does not exist] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.055) 0:03:44.810 *********** \nok: [compute-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 80, \"state\": \"directory\", \"uid\": 988}\n\nTASK [ceph-defaults : set_fact fsid 
ceph_current_fsid.stdout] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.216) 0:03:45.027 *********** \nok: [compute-0] => {\"ansible_facts\": {\"fsid\": \"53912472-747b-11e8-95a3-5254003d7dcb\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.076) 0:03:45.104 *********** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}\n\nTASK [ceph-defaults : generate cluster fsid] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.078) 0:03:45.183 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.045) 0:03:45.228 *********** \nok: [compute-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 53912472-747b-11e8-95a3-5254003d7dcb | tee /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}\n\nTASK [ceph-defaults : read cluster fsid if it already exists] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.199) 0:03:45.428 *********** \nskipping: [compute-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact fsid] *******************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.044) 0:03:45.472 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.046) 0:03:45.519 *********** \nok: [compute-0] => {\"ansible_facts\": {\"mds_name\": \"compute-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.072) 0:03:45.591 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.040) 0:03:45.632 *********** \nok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142\nFriday 22 June 2018 09:16:51 -0400 (0:00:00.073) 0:03:45.705 *********** \nok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.071) 0:03:45.777 *********** \nok: [compute-0] => 
{\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}\n\nTASK [ceph-defaults : resolve device link(s)] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.072) 0:03:45.849 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.052) 0:03:45.902 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.047) 0:03:45.949 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.045) 0:03:45.995 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.043) 0:03:46.038 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.041) 0:03:46.080 *********** 
\nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.042) 0:03:46.122 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.053) 0:03:46.176 *********** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.070) 0:03:46.246 *********** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nFriday 22 June 2018 09:16:52 -0400 (0:00:00.067) 0:03:46.314 *********** \nchanged: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/) => 
{\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", 
\"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [compute-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nFriday 22 June 2018 09:16:57 -0400 (0:00:05.303) 0:03:51.617 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, 
monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nFriday 22 June 2018 09:16:57 -0400 (0:00:00.046) 0:03:51.664 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nFriday 22 June 2018 09:16:57 -0400 (0:00:00.046) 0:03:51.710 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nFriday 22 June 2018 09:16:57 -0400 (0:00:00.045) 0:03:51.756 *********** \nok: [compute-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [compute-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nFriday 22 June 2018 09:16:58 -0400 (0:00:00.979) 0:03:52.735 *********** \nok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nFriday 22 June 2018 09:16:59 -0400 
(0:00:00.074) 0:03:52.810 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nFriday 22 June 2018 09:16:59 -0400 (0:00:00.041) 0:03:52.852 *********** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.026819\", \"end\": \"2018-06-22 13:16:59.633587\", \"rc\": 0, \"start\": \"2018-06-22 13:16:59.606768\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nFriday 22 June 2018 09:16:59 -0400 (0:00:00.540) 0:03:53.392 *********** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nFriday 22 June 2018 09:16:59 -0400 (0:00:00.074) 0:03:53.467 *********** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-compute-0\"], \"delta\": \"0:00:00.029369\", \"end\": \"2018-06-22 13:17:00.254322\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:17:00.224953\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.546) 0:03:54.014 *********** \nskipping: [compute-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.049) 0:03:54.063 *********** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.054) 0:03:54.117 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.047) 0:03:54.164 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.051) 0:03:54.216 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.051) 0:03:54.268 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.057) 0:03:54.326 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.041) 0:03:54.368 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.042) 0:03:54.410 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.047) 0:03:54.458 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.047) 0:03:54.505 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.049) 0:03:54.555 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.054) 0:03:54.609 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.046) 0:03:54.656 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.044) 0:03:54.701 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nFriday 22 June 2018 09:17:00 -0400 (0:00:00.044) 0:03:54.745 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:54.790 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:54.834 *********** \nskipping: 
[compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.051) 0:03:54.886 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.046) 0:03:54.933 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.045) 0:03:54.978 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.045) 0:03:55.024 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.043) 0:03:55.068 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nFriday 22 June 2018 
09:17:01 -0400 (0:00:00.044) 0:03:55.113 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.055) 0:03:55.168 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.045) 0:03:55.213 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.047) 0:03:55.261 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.306 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.350 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.394 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.054) 0:03:55.449 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.494 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.538 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.583 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.627 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nFriday 22 June 2018 09:17:01 -0400 (0:00:00.045) 0:03:55.673 *********** \nok: [compute-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:15.739816\", \"end\": \"2018-06-22 13:17:18.254373\", \"rc\": 0, \"start\": \"2018-06-22 13:17:02.514557\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Download complete\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nFriday 22 June 2018 09:17:18 -0400 (0:00:16.347) 0:04:12.020 *********** \nchanged: [compute-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.030015\", \"end\": \"2018-06-22 13:17:18.913537\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:17:18.883522\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n 
\\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 
7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n 
\\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": 
\\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/0179656c641f4722d6f09053970bc22370490068858f90ad211fc530e928d6a2/diff:/var/lib/docker/overlay2/4a0f358bb31bae2256894d8f9b3d953b4779cb17b2cb2fdef512883ca71f0180/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", 
\" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e 
OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" 
],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph 
${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/0179656c641f4722d6f09053970bc22370490068858f90ad211fc530e928d6a2/diff:/var/lib/docker/overlay2/4a0f358bb31bae2256894d8f9b3d953b4779cb17b2cb2fdef512883ca71f0180/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nFriday 22 June 2018 09:17:18 -0400 (0:00:00.742) 0:04:12.763 *********** \nok: [compute-0] => {\"ansible_facts\": 
{\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.076) 0:04:12.840 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.044) 0:04:12.884 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.044) 0:04:12.929 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.045) 0:04:12.975 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.049) 0:04:13.024 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.044) 0:04:13.069 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.043) 0:04:13.112 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.045) 0:04:13.158 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.047) 0:04:13.206 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.043) 0:04:13.249 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.043) 0:04:13.293 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : 
get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nFriday 22 June 2018 09:17:19 -0400 (0:00:00.051) 0:04:13.345 *********** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.560640\", \"end\": \"2018-06-22 13:17:20.651789\", \"rc\": 0, \"start\": \"2018-06-22 13:17:20.091149\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nFriday 22 June 2018 09:17:20 -0400 (0:00:01.074) 0:04:14.419 *********** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nFriday 22 June 2018 09:17:20 -0400 (0:00:00.076) 0:04:14.495 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nFriday 22 June 2018 09:17:20 -0400 (0:00:00.052) 0:04:14.548 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nFriday 22 June 2018 09:17:20 -0400 (0:00:00.047) 
0:04:14.595 *********** \nok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nFriday 22 June 2018 09:17:20 -0400 (0:00:00.076) 0:04:14.671 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nFriday 22 June 2018 09:17:20 -0400 (0:00:00.047) 0:04:14.719 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nFriday 22 June 2018 09:17:20 -0400 (0:00:00.047) 0:04:14.767 *********** \nchanged: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": 
\"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\nchanged: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}\n\nTASK [ceph-config : create ceph conf directory] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4\nFriday 22 June 2018 09:17:23 -0400 (0:00:02.220) 0:04:16.987 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate ceph configuration file: ceph.conf] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12\nFriday 22 June 2018 09:17:23 -0400 (0:00:00.044) 0:04:17.032 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : create a local fetch directory if it does not exist] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38\nFriday 22 June 2018 09:17:23 -0400 (0:00:00.044) 0:04:17.076 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : generate cluster uuid] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54\nFriday 22 June 2018 09:17:23 -0400 
(0:00:00.052) 0:04:17.129 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : read cluster uuid if it already exists] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64\nFriday 22 June 2018 09:17:23 -0400 (0:00:00.042) 0:04:17.172 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nFriday 22 June 2018 09:17:23 -0400 (0:00:00.042) 0:04:17.214 *********** \nchanged: [compute-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nFriday 22 June 2018 09:17:23 -0400 (0:00:00.512) 0:04:17.727 *********** \nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mon restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy osd restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _osd_handler_called 
after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mds restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy rgw restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy mgr restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for compute-0\nNOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for compute-0\nNOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for compute-0\nNOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for compute-0\nchanged: [compute-0] => {\"changed\": true, \"checksum\": \"eeef7a153f878e6b1077230106cfc6c53cc7d23e\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"7fe0e8e07ef9226787b767b021af3e3a\", \"mode\": \"0644\", 
\"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 978, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673444.0-36547092737052/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-config : set fsid fact when generate_fsid = true] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102\nFriday 22 June 2018 09:17:26 -0400 (0:00:03.019) 0:04:20.746 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : copy ceph admin keyring when non containerized deployment] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml:2\nFriday 22 June 2018 09:17:27 -0400 (0:00:00.040) 0:04:20.787 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-client : set_fact keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:2\nFriday 22 June 2018 09:17:27 -0400 (0:00:00.047) 0:04:20.835 *********** \nok: [compute-0] => (item={u'mon_cap': u'allow r', u'name': u'client.openstack', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}]}, \"changed\": false, \"item\": {\"key\": 
\"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r\", \"name\": \"client.openstack\", \"osd_cap\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}}\nok: [compute-0] => (item={u'mon_cap': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', u'osd_cap': u'allow rw'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}]}, \"changed\": false, \"item\": {\"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mds_cap\": \"allow *\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"name\": \"client.manila\", \"osd_cap\": \"allow rw\"}}\nok: [compute-0] => (item={u'mon_cap': u'allow rw', u'name': u'client.radosgw', u'mgr_cap': 
u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', u'osd_cap': u'allow rwx'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false, \"item\": {\"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow rw\", \"name\": \"client.radosgw\", \"osd_cap\": \"allow rwx\"}}\n\nTASK [ceph-client : set_fact keys - override keys_tmp with keys] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:9\nFriday 22 June 2018 09:17:27 -0400 (0:00:00.205) 0:04:21.041 *********** \nok: [compute-0] => {\"ansible_facts\": {\"keys\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": 
{\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false}\n\nTASK [ceph-client : run a dummy container (sleep 300) from where we can create pool(s)/key(s)] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:15\nFriday 22 June 2018 09:17:27 -0400 (0:00:00.175) 0:04:21.217 *********** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"-d\", \"-v\", \"/etc/ceph:/etc/ceph:z\", \"--name\", \"ceph-create-keys\", \"--entrypoint=sleep\", \"192.168.24.1:8787/rhceph:3-6\", \"300\"], \"delta\": \"0:00:00.294909\", \"end\": \"2018-06-22 13:17:28.351669\", \"rc\": 0, \"start\": \"2018-06-22 13:17:28.056760\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ffa2eb0b84f2da4c45ccee8015b7ee3089a1521a40090f5ec125c3078378e912\", \"stdout_lines\": [\"ffa2eb0b84f2da4c45ccee8015b7ee3089a1521a40090f5ec125c3078378e912\"]}\n\nTASK [ceph-client : set_fact delegated_node] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:30\nFriday 22 June 2018 09:17:28 -0400 (0:00:00.892) 0:04:22.110 *********** \nok: [compute-0] => {\"ansible_facts\": {\"delegated_node\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-client : set_fact condition_copy_admin_key] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:34\nFriday 22 June 2018 09:17:28 -0400 (0:00:00.165) 
0:04:22.275 *********** \nok: [compute-0] => {\"ansible_facts\": {\"condition_copy_admin_key\": true}, \"changed\": false}\n\nTASK [ceph-client : set_fact docker_exec_cmd] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:38\nFriday 22 June 2018 09:17:28 -0400 (0:00:00.073) 0:04:22.349 *********** \nok: [compute-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0 \"}, \"changed\": false}\n\nTASK [ceph-client : create cephx key(s)] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:44\nFriday 22 June 2018 09:17:28 -0400 (0:00:00.134) 0:04:22.484 *********** \nchanged: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mon': u\"'allow r'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.openstack.keyring\", \"--name\", \"client.openstack\", \"--add-key\", \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"--cap\", \"mds\", \"''\", \"--cap\", \"osd\", \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow r'\"], \"delta\": \"0:00:00.145042\", \"end\": \"2018-06-22 13:17:29.408032\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx 
pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-06-22 13:17:29.262990\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.openstack.keyring\\nadded entity client.openstack auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.openstack.keyring\", \"added entity client.openstack auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA== with 0 caps)\"]}\nchanged: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\'', 'mgr': u\"'allow *'\"}, 'name': u'client.manila', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'mode': u'0600'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.manila.keyring\", \"--name\", \"client.manila\", \"--add-key\", \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"--cap\", \"mds\", \"'allow *'\", \"--cap\", \"osd\", \"'allow rw'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\"], \"delta\": \"0:00:00.143312\", \"end\": \"2018-06-22 13:17:30.006629\", \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": 
\"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-06-22 13:17:29.863317\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.manila.keyring\\nadded entity client.manila auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.manila.keyring\", \"added entity client.manila auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw== with 0 caps)\"]}\nchanged: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mon': u\"'allow rw'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.radosgw.keyring\", \"--name\", \"client.radosgw\", \"--add-key\", \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"--cap\", \"mds\", \"''\", \"--cap\", \"osd\", \"'allow rwx'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow rw'\"], \"delta\": \"0:00:00.150798\", \"end\": \"2018-06-22 13:17:30.609805\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-06-22 13:17:30.459007\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.radosgw.keyring\\nadded entity client.radosgw auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.radosgw.keyring\", \"added entity client.radosgw auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw== with 0 
caps)\"]}\n\nTASK [ceph-client : slurp client cephx key(s)] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:62\nFriday 22 June 2018 09:17:30 -0400 (0:00:01.912) 0:04:24.397 *********** \nok: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mon': u\"'allow r'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'name': u'client.openstack'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBUWxwbHJ0Vm5xbkp6ZGNhSGdUSnNPQT09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}\nok: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\'', 'mgr': u\"'allow *'\"}, 'name': u'client.manila', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'mode': u'0600'}) => {\"changed\": false, \"content\": 
\"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBYXU3UmxhWkw1eXZMVjlGa01FblVWdz09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}\nok: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mon': u\"'allow rw'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'name': u'client.radosgw'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCMk55cGJBQUFBQUJBQTJlVTBsYURJaUpHajU2TzMwS29JZHc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}\n\nTASK [ceph-client : list existing pool(s)] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:74\nFriday 22 June 2018 09:17:32 -0400 (0:00:01.377) 0:04:25.774 *********** \n\nTASK [ceph-client : create ceph pool(s)] ***************************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:86\nFriday 22 June 2018 09:17:32 -0400 (0:00:00.042) 0:04:25.817 *********** \n\nTASK [ceph-client : kill a dummy container that created pool(s)/key(s)] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:109\nFriday 22 June 2018 09:17:32 -0400 (0:00:00.042) 0:04:25.859 *********** \nok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"rm\", \"-f\", \"ceph-create-keys\"], \"delta\": \"0:00:00.135610\", \"end\": \"2018-06-22 13:17:32.748685\", \"rc\": 0, \"start\": \"2018-06-22 13:17:32.613075\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph-create-keys\", \"stdout_lines\": [\"ceph-create-keys\"]}\n\nTASK [ceph-client : get client cephx keys] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:116\nFriday 22 June 2018 09:17:32 -0400 (0:00:00.651) 0:04:26.511 *********** \nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBUWxwbHJ0Vm5xbkp6ZGNhSGdUSnNPQT09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.openstack.keyring', 'item': {'mode': u'0600', 'name': u'client.openstack', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mgr': u\"'allow *'\", 'mon': u\"'allow r'\"}}, 
u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.openstack.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"4d6e0bd376eba7986733f512ff8b09821ea74177\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBUWxwbHJ0Vm5xbkp6ZGNhSGdUSnNPQT09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.openstack.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}, \"md5sum\": \"2717ff4f690665b611bacab8236d6e50\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 307, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673452.83-90501969988221/source\", \"state\": \"file\", \"uid\": 167}\nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': 
u'W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBYXU3UmxhWkw1eXZMVjlGa01FblVWdz09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=', 'failed': False, u'source': u'/etc/ceph/ceph.client.manila.keyring', 'item': {'name': u'client.manila', 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'caps': {'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mgr': u\"'allow *'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\''}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.manila.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"3526ba4ba9af42743214640b911c0f92e35ad076\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBYXU3UmxhWkw1eXZMVjlGa01FblVWdz09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.manila.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command 
\\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}, \"md5sum\": \"2acc0be8ca9bbd36db382a6bc3ce46bd\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 284, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673455.19-4454543819/source\", \"state\": \"file\", \"uid\": 167}\nchanged: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCMk55cGJBQUFBQUJBQTJlVTBsYURJaUpHajU2TzMwS29JZHc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.radosgw.keyring', 'item': {'mode': u'0600', 'name': u'client.radosgw', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mgr': u\"'allow *'\", 'mon': u\"'allow rw'\"}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.radosgw.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"242621e5f01c2a1fde923935408055d2268888b1\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCMk55cGJBQUFBQUJBQTJlVTBsYURJaUpHajU2TzMwS29JZHc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.radosgw.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": 
\"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}, \"md5sum\": \"1791e54f0adfcef256a26063d743e45d\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 157, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673457.5-101088153961914/source\", \"state\": \"file\", \"uid\": 167}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nFriday 22 June 2018 09:17:39 -0400 (0:00:07.141) 0:04:33.652 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nFriday 22 June 2018 09:17:39 -0400 (0:00:00.065) 0:04:33.718 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nFriday 22 June 2018 09:17:39 -0400 (0:00:00.041) 0:04:33.760 *********** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.075) 0:04:33.836 *********** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.076) 0:04:33.913 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nFriday 22 June 2018 
09:17:40 -0400 (0:00:00.062) 0:04:33.975 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.062) 0:04:34.038 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.042) 0:04:34.080 *********** \nskipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.069) 0:04:34.150 *********** \nskipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.073) 0:04:34.224 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.064) 0:04:34.289 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.063) 0:04:34.353 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.040) 0:04:34.394 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.048) 0:04:34.442 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.047) 0:04:34.490 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.064) 0:04:34.555 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.065) 0:04:34.620 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.041) 0:04:34.661 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.046) 0:04:34.708 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nFriday 22 June 2018 09:17:40 -0400 (0:00:00.047) 0:04:34.755 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.063) 0:04:34.819 *********** \nok: [compute-0] => 
{\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.061) 0:04:34.881 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.040) 0:04:34.922 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.053) 0:04:34.975 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.048) 0:04:35.024 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.064) 0:04:35.088 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}\n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.062) 0:04:35.151 *********** \nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.042) 0:04:35.193 *********** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) 
- container] *******\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.074) 0:04:35.268 *********** \nskipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.071) 0:04:35.339 *********** \nok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}\nMETA: ran handlers\n\nTASK [set ceph client install 'Complete'] **************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:324\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.199) 0:04:35.538 *********** \nok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"end\": \"20180622091741Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\n\nPLAY RECAP *********************************************************************\nceph-0 : ok=88 changed=18 unreachable=0 failed=0 \ncompute-0 : ok=57 changed=7 unreachable=0 failed=0 \ncontroller-0 : ok=119 changed=20 unreachable=0 failed=0 \n\n\nINSTALLER STATUS ***************************************************************\nInstall Ceph Monitor : Complete (0:01:08)\nInstall Ceph Manager : Complete (0:00:38)\nInstall Ceph OSD : Complete (0:01:45)\nInstall Ceph Client : Complete (0:00:54)\n\nFriday 22 June 2018 09:17:41 -0400 (0:00:00.156) 0:04:35.694 *********** \n=============================================================================== \nceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 17.16s\n/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----\nceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 16.72s\n/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----\nceph-docker-common : pulling 
192.168.24.1:8787/rhceph:3-6 image -------- 16.35s\n/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----\ngather and delegate facts ----------------------------------------------- 8.62s\n/usr/share/ceph-ansible/site-docker.yml.sample:29 -----------------------------\nceph-osd : prepare ceph containerized osd disk collocated --------------- 7.55s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5 -------\nceph-osd : create openstack pool(s) ------------------------------------- 7.36s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21 ----------\nceph-client : get client cephx keys ------------------------------------- 7.14s\n/usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:116 -----\nceph-osd : assign application to pool(s) -------------------------------- 6.04s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:41 ----------\nceph-osd : copy to other mons the openstack cephx key(s) ---------------- 5.41s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:71 ----------\nceph-defaults : create ceph initial directories ------------------------- 5.36s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 \nceph-defaults : create ceph initial directories ------------------------- 5.34s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 \nceph-defaults : create ceph initial directories ------------------------- 5.30s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 \nceph-defaults : create ceph initial directories ------------------------- 5.08s\n/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 \nceph-osd : list existing pool(s) ---------------------------------------- 4.31s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12 ----------\nceph-osd : create openstack cephx key(s) 
-------------------------------- 4.10s\n/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:50 ----------\nceph-config : generate ceph.conf configuration file --------------------- 3.32s\n/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------\nceph-config : generate ceph.conf configuration file --------------------- 3.08s\n/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------\nceph-config : generate ceph.conf configuration file --------------------- 3.02s\n/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------\nceph-mon : push ceph files to the ansible server ------------------------ 2.89s\n/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2 -------\nceph-mgr : generate systemd unit file ----------------------------------- 2.88s\n/usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2 ----", "stdout_lines": ["ansible-playbook 2.5.4", " config file = /usr/share/ceph-ansible/ansible.cfg", " configured module search path = [u'/usr/share/ceph-ansible/library']", " ansible python module location = /usr/lib/python2.7/site-packages/ansible", " executable location = /usr/bin/ansible-playbook", " python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]", "Using /usr/share/ceph-ansible/ansible.cfg as config file", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml", "", "PLAYBOOK: site-docker.yml.sample ***********************************************", "12 plays in /usr/share/ceph-ansible/site-docker.yml.sample", "", "PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***", "", "TASK [gather facts] ************************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:24", "Friday 22 June 2018 09:13:06 -0400 (0:00:00.191) 0:00:00.191 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [gather and delegate facts] 
***********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:29", "Friday 22 June 2018 09:13:06 -0400 (0:00:00.075) 0:00:00.267 *********** ", "ok: [controller-0 -> 192.168.24.15] => (item=compute-0)", "ok: [controller-0 -> 192.168.24.8] => (item=controller-0)", "ok: [controller-0 -> 192.168.24.10] => (item=ceph-0)", "", "TASK [check if it is atomic host] **********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:38", "Friday 22 June 2018 09:13:15 -0400 (0:00:08.624) 0:00:08.892 *********** ", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [set_fact is_atomic] ******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:45", "Friday 22 June 2018 09:13:15 -0400 (0:00:00.745) 0:00:09.638 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "TASK [pull rhceph image] *******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:66", "Friday 22 June 2018 09:13:16 -0400 (0:00:00.237) 0:00:09.876 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran 
handlers", "", "TASK [set ceph monitor install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:76", "Friday 22 June 2018 09:13:16 -0400 (0:00:00.103) 0:00:09.979 *********** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"start\": \"20180622091316Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Friday 22 June 2018 09:13:16 -0400 (0:00:00.243) 0:00:10.223 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.030480\", \"end\": \"2018-06-22 13:13:17.212568\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:13:17.182088\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Friday 22 June 2018 09:13:17 -0400 (0:00:00.753) 0:00:10.976 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Friday 22 June 2018 09:13:17 -0400 (0:00:00.050) 0:00:11.027 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a 
rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Friday 22 June 2018 09:13:17 -0400 (0:00:00.046) 0:00:11.074 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Friday 22 June 2018 09:13:17 -0400 (0:00:00.046) 0:00:11.120 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.030906\", \"end\": \"2018-06-22 13:13:17.928958\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:13:17.898052\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Friday 22 June 2018 09:13:17 -0400 (0:00:00.574) 0:00:11.695 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Friday 22 June 2018 09:13:17 -0400 (0:00:00.049) 0:00:11.744 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.046) 0:00:11.791 *********** ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.046) 0:00:11.838 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.048) 0:00:11.887 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:11.934 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:11.981 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.044) 0:00:12.026 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.045) 0:00:12.072 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.045) 0:00:12.117 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.052) 0:00:12.170 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.045) 0:00:12.216 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.263 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.310 
*********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.357 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.405 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.048) 0:00:12.453 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.047) 0:00:12.501 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.046) 0:00:12.547 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph 
rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.044) 0:00:12.592 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.045) 0:00:12.637 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.053) 0:00:12.690 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Friday 22 June 2018 09:13:18 -0400 (0:00:00.050) 0:00:12.741 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Friday 22 June 2018 09:13:19 -0400 (0:00:00.046) 0:00:12.787 *********** ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Friday 22 June 2018 09:13:19 -0400 (0:00:00.532) 0:00:13.320 
*********** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Friday 22 June 2018 09:13:19 -0400 (0:00:00.079) 0:00:13.399 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Friday 22 June 2018 09:13:19 -0400 (0:00:00.075) 0:00:13.474 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Friday 22 June 2018 09:13:19 -0400 (0:00:00.070) 0:00:13.544 *********** ", "ok: [controller-0 -> 192.168.24.8] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Friday 22 June 2018 09:13:19 -0400 (0:00:00.144) 0:00:13.689 *********** ", "ok: [controller-0 -> 192.168.24.8] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.030274\", \"end\": \"2018-06-22 13:13:20.489487\", \"failed_when_result\": false, \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-06-22 13:13:20.459213\", \"stderr\": \"Error response from daemon: No such container: ceph-mon-controller-0\", \"stderr_lines\": [\"Error response from daemon: No such container: ceph-mon-controller-0\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check if /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Friday 22 June 2018 09:13:20 -0400 (0:00:00.569) 0:00:14.259 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Friday 22 June 2018 09:13:20 -0400 (0:00:00.192) 0:00:14.451 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Friday 22 June 2018 09:13:20 -0400 (0:00:00.048) 0:00:14.499 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", 
\"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.428) 0:00:14.927 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.044) 0:00:14.971 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.071) 0:00:15.043 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.043) 0:00:15.087 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.049) 0:00:15.136 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Friday 
22 June 2018 09:13:21 -0400 (0:00:00.041) 0:00:15.178 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.040) 0:00:15.218 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.071) 0:00:15.290 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.042) 0:00:15.332 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.175) 0:00:15.508 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Friday 22 June 2018 09:13:21 -0400 (0:00:00.174) 0:00:15.683 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] 
**********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.184) 0:00:15.868 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.051) 0:00:15.919 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.051) 0:00:15.970 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.052) 0:00:16.023 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.044) 0:00:16.068 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.044) 0:00:16.112 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.044) 0:00:16.156 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.045) 0:00:16.202 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.167) 0:00:16.370 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Friday 22 June 2018 09:13:22 -0400 (0:00:00.175) 0:00:16.545 *********** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": true, 
\"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": 
\"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Friday 22 June 2018 09:13:28 -0400 (0:00:05.340) 0:00:21.885 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Friday 22 June 2018 09:13:28 -0400 (0:00:00.046) 0:00:21.932 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Friday 22 June 2018 09:13:28 -0400 (0:00:00.055) 0:00:21.988 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Friday 22 June 2018 09:13:28 -0400 (0:00:00.050) 0:00:22.038 *********** ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Friday 22 June 2018 09:13:29 -0400 (0:00:00.937) 0:00:22.976 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task 
path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Friday 22 June 2018 09:13:29 -0400 (0:00:00.073) 0:00:23.050 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Friday 22 June 2018 09:13:29 -0400 (0:00:00.044) 0:00:23.094 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.024983\", \"end\": \"2018-06-22 13:13:29.839176\", \"rc\": 0, \"start\": \"2018-06-22 13:13:29.814193\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Friday 22 June 2018 09:13:29 -0400 (0:00:00.506) 0:00:23.600 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Friday 22 June 2018 09:13:29 -0400 (0:00:00.068) 0:00:23.669 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.027717\", \"end\": \"2018-06-22 13:13:30.426991\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:13:30.399274\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Friday 22 June 2018 09:13:30 -0400 (0:00:00.521) 0:00:24.190 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Friday 22 June 2018 09:13:30 -0400 (0:00:00.085) 0:00:24.276 *********** ", "ok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Friday 22 June 2018 09:13:30 -0400 (0:00:00.122) 0:00:24.398 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Friday 22 June 2018 09:13:30 -0400 (0:00:00.084) 0:00:24.483 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", 
\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Friday 22 June 2018 09:13:30 -0400 (0:00:00.088) 0:00:24.571 *********** ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, 
\"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Friday 22 June 2018 09:13:32 -0400 (0:00:01.242) 0:00:25.814 *********** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: 
[controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", 
{\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": 
{\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': 
{'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.266) 0:00:26.081 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Friday 22 
June 2018 09:13:32 -0400 (0:00:00.043) 0:00:26.124 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.044) 0:00:26.169 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.050) 0:00:26.220 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.048) 0:00:26.268 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.047) 0:00:26.316 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.045) 0:00:26.361 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.053) 0:00:26.414 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.043) 0:00:26.458 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.049) 0:00:26.507 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.042) 0:00:26.549 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.041) 0:00:26.591 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.043) 
0:00:26.634 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.047) 0:00:26.681 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.045) 0:00:26.727 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Friday 22 June 2018 09:13:32 -0400 (0:00:00.042) 0:00:26.769 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.046) 0:00:26.816 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.046) 0:00:26.862 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] 
***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.052) 0:00:26.914 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.043) 0:00:26.958 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.046) 0:00:27.004 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.041) 0:00:27.046 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.040) 0:00:27.087 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.046) 0:00:27.134 *********** ", "skipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.042) 0:00:27.176 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.045) 0:00:27.222 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.042) 0:00:27.264 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.045) 0:00:27.309 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.043) 0:00:27.352 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Friday 22 June 2018 09:13:33 -0400 (0:00:00.051) 0:00:27.404 *********** ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.555002\", \"end\": \"2018-06-22 13:13:50.800180\", \"rc\": 0, \"start\": \"2018-06-22 13:13:34.245178\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Friday 22 June 2018 09:13:50 -0400 (0:00:17.161) 0:00:44.566 *********** ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.029250\", \"end\": \"2018-06-22 13:13:51.426584\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:13:51.397334\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": 
{},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph 
Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n 
\\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": 
\\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": 
false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" 
\\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Friday 22 June 2018 09:13:51 -0400 (0:00:00.629) 0:00:45.196 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Friday 22 June 2018 09:13:51 -0400 (0:00:00.183) 0:00:45.379 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Friday 22 June 2018 09:13:51 -0400 (0:00:00.049) 0:00:45.428 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Friday 22 June 2018 09:13:51 -0400 (0:00:00.042) 0:00:45.471 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Friday 22 June 2018 09:13:51 -0400 (0:00:00.041) 0:00:45.512 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Friday 22 June 2018 09:13:51 -0400 (0:00:00.046) 
0:00:45.558 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Friday 22 June 2018 09:13:51 -0400 (0:00:00.049) 0:00:45.608 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Friday 22 June 2018 09:13:51 -0400 (0:00:00.143) 0:00:45.751 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Friday 22 June 2018 09:13:52 -0400 (0:00:00.046) 0:00:45.798 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Friday 22 June 2018 09:13:52 -0400 (0:00:00.048) 0:00:45.846 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Friday 22 June 2018 09:13:52 -0400 (0:00:00.046) 0:00:45.892 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] 
*********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Friday 22 June 2018 09:13:52 -0400 (0:00:00.042) 0:00:45.935 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Friday 22 June 2018 09:13:52 -0400 (0:00:00.050) 0:00:45.985 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.560244\", \"end\": \"2018-06-22 13:13:53.270265\", \"rc\": 0, \"start\": \"2018-06-22 13:13:52.710021\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Friday 22 June 2018 09:13:53 -0400 (0:00:01.046) 0:00:47.032 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Friday 22 June 2018 09:13:53 -0400 (0:00:00.071) 0:00:47.103 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Friday 22 June 2018 
09:13:53 -0400 (0:00:00.049) 0:00:47.153 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Friday 22 June 2018 09:13:53 -0400 (0:00:00.045) 0:00:47.199 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Friday 22 June 2018 09:13:53 -0400 (0:00:00.070) 0:00:47.269 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Friday 22 June 2018 09:13:53 -0400 (0:00:00.045) 0:00:47.315 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Friday 22 June 2018 09:13:53 -0400 (0:00:00.046) 0:00:47.361 *********** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": 
\"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Friday 22 June 2018 09:13:55 -0400 (0:00:02.180) 0:00:49.542 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Friday 22 June 2018 09:13:55 -0400 (0:00:00.048) 0:00:49.590 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] 
*******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Friday 22 June 2018 09:13:55 -0400 (0:00:00.048) 0:00:49.639 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Friday 22 June 2018 09:13:56 -0400 (0:00:00.215) 0:00:49.854 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Friday 22 June 2018 09:13:56 -0400 (0:00:00.050) 0:00:49.905 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Friday 22 June 2018 09:13:56 -0400 (0:00:00.047) 0:00:49.953 *********** ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Friday 22 June 2018 09:13:56 -0400 (0:00:00.487) 0:00:50.440 *********** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for 
controller-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER 
ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"8376233e5a1bc87f2c4fab91f94a8b75f6c6a2f6\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"0f740ab4fb6329f001a8e004a4e1d994\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 761, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673236.71-134691013098495/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Friday 22 June 2018 09:13:59 -0400 (0:00:03.324) 0:00:53.765 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.042) 0:00:53.808 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.069) 0:00:53.877 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate monitor initial keyring] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.052) 0:00:53.929 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : read monitor initial keyring if it already exists] ************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.044) 0:00:53.973 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create monitor initial keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.049) 0:00:54.023 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set initial monitor key permissions] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.042) 0:00:54.065 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create (and fix ownership of) monitor directory] **************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.044) 0:00:54.109 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.044) 0:00:54.154 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.197 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create custom admin keyring] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.241 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set ownership of admin keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.042) 0:00:54.284 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : import admin keyring into mon keyring] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.327 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106", "Friday 22 June 2018 09:14:00 -0400 
(0:00:00.044) 0:00:54.371 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.415 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.042) 0:00:54.458 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.052) 0:00:54.510 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : start the monitor service] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.554 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : enable the ceph-mon.target service] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.598 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : include ceph_keys.yml] ****************************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.641 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : collect all the pools] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.043) 0:00:54.684 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : secure the cluster] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7", "Friday 22 June 2018 09:14:00 -0400 (0:00:00.041) 0:00:54.726 *********** ", "", "TASK [ceph-mon : set_fact ceph_config_keys] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2", "Friday 22 June 2018 09:14:01 -0400 (0:00:00.046) 0:00:54.773 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : register rbd bootstrap key] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11", "Friday 22 June 2018 09:14:01 -0400 (0:00:00.074) 0:00:54.848 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"bootstrap_rbd_keyring\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17", "Friday 22 June 2018 09:14:01 -0400 
(0:00:00.070) 0:00:54.918 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : stat for ceph config and keys] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22", "Friday 22 June 2018 09:14:01 -0400 (0:00:00.075) 0:00:54.994 *********** ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-mon : try to 
copy ceph keys] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:33", "Friday 22 June 2018 09:14:02 -0400 (0:00:00.854) 0:00:55.848 *********** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, 
'_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": 
[\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, 
\"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with default ceph.conf] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2", "Friday 22 June 2018 09:14:02 -0400 (0:00:00.121) 0:00:55.969 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with custom ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18", "Friday 22 June 2018 09:14:02 -0400 (0:00:00.047) 0:00:56.017 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : delete populate-kv-store docker] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36", "Friday 22 June 2018 09:14:02 -0400 (0:00:00.062) 0:00:56.080 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43", "Friday 22 June 2018 09:14:02 -0400 (0:00:00.045) 0:00:56.126 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"c295bd0e2b9ac132014f0c7ae2b5171a5053fe0b\", \"dest\": \"/etc/systemd/system/ceph-mon@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"ad5a25ce16b55be4b0d5e4bf757255da\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", 
\"size\": 835, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673242.39-172271329724894/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : systemd start mon container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54", "Friday 22 June 2018 09:14:05 -0400 (0:00:02.778) 0:00:58.904 *********** ", "ok: [controller-0] => {\"changed\": false, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"basic.target system-ceph\\\\x5cx2dmon.slice docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v 
/etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.18 -e CLUSTER=ceph -e FSID=53912472-747b-11e8-95a3-5254003d7dcb -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127793\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127793\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": 
\"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mon : configure ceph profile.d aliases] *****************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2", "Friday 22 June 2018 09:14:06 -0400 (0:00:00.895) 0:00:59.800 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673246.18-258778462067608/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : wait for monitor socket to exist] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12", "Friday 22 June 2018 09:14:08 -0400 (0:00:02.628) 0:01:02.429 *********** ", "changed: [controller-0] => {\"attempts\": 1, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.083078\", \"end\": \"2018-06-22 13:14:09.344607\", \"rc\": 0, \"start\": \"2018-06-22 13:14:09.261529\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 14h/20d\\tInode: 371425 Links: 1\\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-06-22 13:14:07.048930719 +0000\\nModify: 2018-06-22 13:14:07.048930719 +0000\\nChange: 2018-06-22 13:14:07.048930719 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 14h/20d\\tInode: 371425 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-06-22 13:14:07.048930719 +0000\", \"Modify: 2018-06-22 13:14:07.048930719 
+0000\", \"Change: 2018-06-22 13:14:07.048930719 +0000\", \" Birth: -\"]}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19", "Friday 22 June 2018 09:14:09 -0400 (0:00:00.680) 0:01:03.110 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29", "Friday 22 June 2018 09:14:09 -0400 (0:00:00.093) 0:01:03.203 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39", "Friday 22 June 2018 09:14:09 -0400 (0:00:00.087) 0:01:03.291 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.18\"], \"delta\": \"0:00:00.186992\", \"end\": \"2018-06-22 13:14:10.515708\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:10.328716\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49", "Friday 22 June 2018 09:14:10 -0400 (0:00:00.986) 0:01:04.278 *********** ", "skipping: [controller-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59", "Friday 22 June 2018 09:14:10 -0400 (0:00:00.048) 0:01:04.327 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69", "Friday 22 June 2018 09:14:10 -0400 (0:00:00.223) 0:01:04.550 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : push ceph files to the ansible server] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2", "Friday 22 June 2018 09:14:10 -0400 (0:00:00.048) 0:01:04.598 *********** ", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"793b49d83f132a70fc67d6c0569cfa8c71650741\", \"dest\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"edc649fc880af546c25f69c696fca0fe\", \"remote_checksum\": \"793b49d83f132a70fc67d6c0569cfa8c71650741\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"dae692cfee0fa0a32ffaad10f7d24e310a009db9\", \"dest\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"45da627f7c55925963e129ae734f2d5e\", \"remote_checksum\": \"dae692cfee0fa0a32ffaad10f7d24e310a009db9\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"d8a7f9eb9d9dc0395da75fc7759797ea97e335aa\", \"dest\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"5208039d17edb4ccda0d9023c061854b\", \"remote_checksum\": \"d8a7f9eb9d9dc0395da75fc7759797ea97e335aa\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"9613a61f8c01ce2de5a65853e6a5574e32ab15c0\", \"dest\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"9e6c050c69d1e668638ae983ad165248\", \"remote_checksum\": \"9613a61f8c01ce2de5a65853e6a5574e32ab15c0\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"11de432a77f2de2b2705ea5780f568345ba62116\", \"dest\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"782622eddeeebdfdb6434bdb74e33313\", \"remote_checksum\": \"11de432a77f2de2b2705ea5780f568345ba62116\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": true, \"checksum\": \"fa627b4b6c0e4d6b86f16984405cd43c6dd3021c\", \"dest\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"42c481a7f7e4ffbdc34aade7c3965f84\", \"remote_checksum\": \"fa627b4b6c0e4d6b86f16984405cd43c6dd3021c\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84", "Friday 22 June 2018 09:14:13 -0400 (0:00:02.887) 0:01:07.486 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97", "Friday 22 June 2018 09:14:13 -0400 (0:00:00.046) 0:01:07.532 *********** ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"get-or-create\", \"mgr.controller-0\", \"mon\", \"allow profile mgr\", \"osd\", \"allow *\", \"mds\", \"allow *\", \"-o\", 
\"/etc/ceph/ceph.mgr.controller-0.keyring\"], \"delta\": \"0:00:00.342343\", \"end\": \"2018-06-22 13:14:14.722678\", \"item\": \"controller-0\", \"rc\": 0, \"start\": \"2018-06-22 13:14:14.380335\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-mon : stat for ceph mgr key(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109", "Friday 22 June 2018 09:14:14 -0400 (0:00:00.952) 0:01:08.485 *********** ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"controller-0\", \"stat\": {\"atime\": 1529673254.5999415, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"ctime\": 1529673254.7039416, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 69329262, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1529673254.7039416, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744073449758241\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-mon : fetch ceph mgr key(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121", "Friday 22 June 2018 09:14:15 -0400 (0:00:00.625) 0:01:09.111 *********** ", "changed: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 0, u'exists': True, u'attr_flags': u'', 
u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673254.7039416, u'block_size': 4096, u'inode': 69329262, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'charset': u'us-ascii', u'readable': True, u'version': u'18446744073449758241', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1529673254.5999415, u'mimetype': u'text/plain', u'ctime': 1529673254.7039416, u'isblk': False, u'xgrp': False, u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'checksum': u'f1eb3e81a4f49f68787b67580eb8b9601f3e1e36', u'islnk': False, u'attributes': []}, u'changed': False, '_ansible_no_log': False, 'item': u'controller-0', '_ansible_item_result': True, 'failed': False, u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"dest\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.mgr.controller-0.keyring\", \"item\": {\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"controller-0\", \"stat\": {\"atime\": 1529673254.5999415, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": 
\"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"ctime\": 1529673254.7039416, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 69329262, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1529673254.7039416, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"18446744073449758241\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}, \"md5sum\": \"27b1ed102ad44a0a24aa2cc10f78f0d3\", \"remote_checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : configure crush hierarchy] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2", "Friday 22 June 2018 09:14:15 -0400 (0:00:00.579) 0:01:09.690 *********** ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create configured crush rules] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14", "Friday 22 June 2018 09:14:15 -0400 (0:00:00.049) 0:01:09.739 *********** ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": 
{\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get id for new default crush rule] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.053) 0:01:09.793 *********** ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.054) 0:01:09.847 *********** ", "skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, 
'_ansible_ignore_errors': None}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.056) 0:01:09.903 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.069) 0:01:09.973 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add new default crush rule to ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.072) 0:01:10.046 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.048) 0:01:10.095 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***", 
"task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.050) 0:01:10.145 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.042) 0:01:10.188 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num ceph_conf_overrides.global.osd_pool_default_pg_num] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.045) 0:01:10.233 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"osd_pool_default_pg_num\": \"32\"}, \"changed\": false}", "", "TASK [ceph-mon : increase calamari logging level when debug is on] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:9", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.070) 0:01:10.303 *********** ", "skipping: [controller-0] => (item=cthulhu) => {\"changed\": false, \"item\": \"cthulhu\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=calamari_web) => {\"changed\": false, \"item\": \"calamari_web\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : initialize the calamari server api] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:20", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.047) 0:01:10.351 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] 
*******", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.014) 0:01:10.365 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Friday 22 June 2018 09:14:16 -0400 (0:00:00.065) 0:01:10.431 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"a16eea5d614de2b10079cb91a04686e919ccc201\", \"dest\": \"/tmp/restart_mon_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"b59e1abae52d61eb05b9ff080771a551\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 1173, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673256.72-84454899936950/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Friday 22 June 2018 09:14:19 -0400 (0:00:02.523) 0:01:12.954 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.083) 0:01:13.038 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.118) 0:01:13.156 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.066) 0:01:13.223 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", 
"RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.067) 0:01:13.290 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.044) 0:01:13.334 *********** ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.072) 0:01:13.407 *********** ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.078) 0:01:13.485 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.066) 0:01:13.552 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.062) 0:01:13.614 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.042) 0:01:13.656 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds 
daemon(s) - container] *******", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.049) 0:01:13.706 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Friday 22 June 2018 09:14:19 -0400 (0:00:00.054) 0:01:13.760 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Friday 22 June 2018 09:14:20 -0400 (0:00:00.162) 0:01:13.923 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Friday 22 June 2018 09:14:20 -0400 (0:00:00.162) 0:01:14.086 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Friday 22 June 2018 09:14:20 -0400 (0:00:00.060) 0:01:14.146 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Friday 22 June 2018 09:14:20 -0400 (0:00:00.081) 0:01:14.227 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Friday 22 June 2018 09:14:20 -0400 (0:00:00.078) 0:01:14.306 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Friday 22 June 2018 09:14:20 -0400 (0:00:00.216) 0:01:14.522 *********** ", "ok: [controller-0] => 
{\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Friday 22 June 2018 09:14:20 -0400 (0:00:00.170) 0:01:14.693 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Friday 22 June 2018 09:14:20 -0400 (0:00:00.049) 0:01:14.742 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Friday 22 June 2018 09:14:21 -0400 (0:00:00.059) 0:01:14.802 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Friday 22 June 2018 09:14:21 -0400 (0:00:00.057) 0:01:14.859 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Friday 22 June 2018 09:14:21 -0400 (0:00:00.164) 0:01:15.024 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Friday 22 June 2018 09:14:21 -0400 (0:00:00.193) 0:01:15.217 *********** ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"f36b3460f6762a853a3dab1958afb7d83ff8f234\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"9d50588dc55f43284b00033b8b30edc3\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 570, \"src\": 
\"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673261.64-37075549057491/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Friday 22 June 2018 09:14:24 -0400 (0:00:02.583) 0:01:17.801 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Friday 22 June 2018 09:14:24 -0400 (0:00:00.094) 0:01:17.895 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Friday 22 June 2018 09:14:24 -0400 (0:00:00.138) 0:01:18.033 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:98", "Friday 22 June 2018 09:14:24 -0400 (0:00:00.112) 0:01:18.145 *********** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"end\": \"20180622091424Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mgrs] ********************************************************************", "", "TASK [set ceph manager install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:110", "Friday 22 June 2018 09:14:24 -0400 (0:00:00.148) 0:01:18.294 *********** ", "ok: [controller-0] => 
{\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"start\": \"20180622091424Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Friday 22 June 2018 09:14:24 -0400 (0:00:00.081) 0:01:18.376 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.027560\", \"end\": \"2018-06-22 13:14:25.146567\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:25.119007\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"2d71e99d5d90\", \"stdout_lines\": [\"2d71e99d5d90\"]}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Friday 22 June 2018 09:14:25 -0400 (0:00:00.532) 0:01:18.908 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Friday 22 June 2018 09:14:25 -0400 (0:00:00.046) 0:01:18.955 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Friday 22 June 2018 09:14:25 -0400 (0:00:00.049) 0:01:19.004 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr 
container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Friday 22 June 2018 09:14:25 -0400 (0:00:00.046) 0:01:19.051 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.028446\", \"end\": \"2018-06-22 13:14:25.815683\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:25.787237\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Friday 22 June 2018 09:14:25 -0400 (0:00:00.525) 0:01:19.576 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Friday 22 June 2018 09:14:25 -0400 (0:00:00.046) 0:01:19.623 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Friday 22 June 2018 09:14:25 -0400 (0:00:00.044) 0:01:19.667 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Friday 22 June 2018 09:14:25 -0400 (0:00:00.053) 0:01:19.720 *********** ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Friday 22 June 2018 09:14:25 -0400 (0:00:00.046) 0:01:19.766 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:19.812 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.047) 0:01:19.860 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:19.906 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.049) 0:01:19.955 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:20.002 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.044) 0:01:20.047 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.042) 0:01:20.090 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.042) 0:01:20.132 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:20.179 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.052) 0:01:20.231 
*********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.277 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.322 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.367 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.412 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.045) 0:01:20.458 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : 
check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.043) 0:01:20.501 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.044) 0:01:20.546 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.046) 0:01:20.592 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Friday 22 June 2018 09:14:26 -0400 (0:00:00.044) 0:01:20.637 *********** ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Friday 22 June 2018 09:14:27 -0400 (0:00:00.500) 0:01:21.137 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Friday 22 June 2018 09:14:27 -0400 (0:00:00.069) 0:01:21.206 *********** ", "ok: [controller-0] => 
{\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Friday 22 June 2018 09:14:27 -0400 (0:00:00.070) 0:01:21.277 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Friday 22 June 2018 09:14:27 -0400 (0:00:00.066) 0:01:21.343 *********** ", "ok: [controller-0 -> 192.168.24.8] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Friday 22 June 2018 09:14:27 -0400 (0:00:00.134) 0:01:21.478 *********** ", "ok: [controller-0 -> 192.168.24.8] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.331332\", \"end\": \"2018-06-22 13:14:28.558606\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:28.227274\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"53912472-747b-11e8-95a3-5254003d7dcb\", \"stdout_lines\": [\"53912472-747b-11e8-95a3-5254003d7dcb\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Friday 22 June 2018 09:14:28 -0400 (0:00:00.848) 0:01:22.326 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] 
*************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Friday 22 June 2018 09:14:28 -0400 (0:00:00.184) 0:01:22.511 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Friday 22 June 2018 09:14:28 -0400 (0:00:00.050) 0:01:22.562 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 50, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Friday 22 June 2018 09:14:28 -0400 (0:00:00.185) 0:01:22.748 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"fsid\": \"53912472-747b-11e8-95a3-5254003d7dcb\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.169) 0:01:22.917 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.245) 0:01:23.162 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", 
"task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.044) 0:01:23.207 *********** ", "changed: [controller-0 -> localhost] => {\"changed\": true, \"cmd\": \"echo 53912472-747b-11e8-95a3-5254003d7dcb | tee /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"delta\": \"0:00:00.005088\", \"end\": \"2018-06-22 09:14:29.578341\", \"rc\": 0, \"start\": \"2018-06-22 09:14:29.573253\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"53912472-747b-11e8-95a3-5254003d7dcb\", \"stdout_lines\": [\"53912472-747b-11e8-95a3-5254003d7dcb\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.185) 0:01:23.392 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.041) 0:01:23.433 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.038) 0:01:23.471 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.074) 0:01:23.546 *********** ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.040) 0:01:23.587 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.042) 0:01:23.629 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.043) 0:01:23.672 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.043) 0:01:23.716 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Friday 22 June 2018 09:14:29 -0400 (0:00:00.046) 0:01:23.762 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Friday 22 June 2018 09:14:30 -0400 
(0:00:00.055) 0:01:23.818 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Friday 22 June 2018 09:14:30 -0400 (0:00:00.045) 0:01:23.863 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Friday 22 June 2018 09:14:30 -0400 (0:00:00.043) 0:01:23.907 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Friday 22 June 2018 09:14:30 -0400 (0:00:00.047) 0:01:23.955 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Friday 22 June 2018 09:14:30 -0400 (0:00:00.044) 0:01:24.000 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Friday 22 June 2018 09:14:30 -0400 (0:00:00.049) 0:01:24.049 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Friday 22 June 2018 09:14:30 -0400 (0:00:00.073) 0:01:24.123 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Friday 22 June 2018 09:14:30 -0400 (0:00:00.070) 0:01:24.193 *********** ", "ok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": 
\"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": 
\"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Friday 22 June 2018 09:14:35 -0400 (0:00:05.361) 0:01:29.554 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Friday 22 June 2018 09:14:35 -0400 (0:00:00.052) 0:01:29.607 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Friday 22 June 2018 09:14:35 -0400 (0:00:00.059) 0:01:29.666 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove 
ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Friday 22 June 2018 09:14:35 -0400 (0:00:00.052) 0:01:29.718 *********** ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Friday 22 June 2018 09:14:36 -0400 (0:00:00.937) 0:01:30.656 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Friday 22 June 2018 09:14:36 -0400 (0:00:00.075) 0:01:30.731 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Friday 22 June 2018 09:14:37 -0400 (0:00:00.045) 0:01:30.777 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.026729\", \"end\": \"2018-06-22 13:14:37.541626\", \"rc\": 0, \"start\": \"2018-06-22 13:14:37.514897\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", 
\"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Friday 22 June 2018 09:14:37 -0400 (0:00:00.521) 0:01:31.299 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Friday 22 June 2018 09:14:37 -0400 (0:00:00.069) 0:01:31.368 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.027566\", \"end\": \"2018-06-22 13:14:38.144549\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:38.116983\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"2d71e99d5d90\", \"stdout_lines\": [\"2d71e99d5d90\"]}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.532) 0:01:31.901 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.050) 0:01:31.952 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.053) 0:01:32.005 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.047) 0:01:32.053 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.053) 0:01:32.106 *********** ", "skipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": 
false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.100) 0:01:32.207 *********** ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": 
[\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, 
'_ansible_ignore_errors': None}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.105) 0:01:32.313 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.039) 0:01:32.352 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.039) 0:01:32.392 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.047) 0:01:32.439 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.045) 0:01:32.484 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.045) 0:01:32.530 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.042) 0:01:32.573 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.040) 0:01:32.613 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Friday 22 June 2018 09:14:38 -0400 (0:00:00.040) 0:01:32.654 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"2d71e99d5d90\"], \"delta\": \"0:00:00.031052\", \"end\": \"2018-06-22 13:14:39.535609\", \"rc\": 0, \"start\": \"2018-06-22 13:14:39.504557\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105\\\",\\n \\\"Created\\\": 
\\\"2018-06-22T13:14:06.054795034Z\\\",\\n \\\"Path\\\": \\\"/entrypoint.sh\\\",\\n \\\"Args\\\": [],\\n \\\"State\\\": {\\n \\\"Status\\\": \\\"running\\\",\\n \\\"Running\\\": true,\\n \\\"Paused\\\": false,\\n \\\"Restarting\\\": false,\\n \\\"OOMKilled\\\": false,\\n \\\"Dead\\\": false,\\n \\\"Pid\\\": 50029,\\n \\\"ExitCode\\\": 0,\\n \\\"Error\\\": \\\"\\\",\\n \\\"StartedAt\\\": \\\"2018-06-22T13:14:06.243843393Z\\\",\\n \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\\n },\\n \\\"Image\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/resolv.conf\\\",\\n \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/hostname\\\",\\n \\\"HostsPath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/hosts\\\",\\n \\\"LogPath\\\": \\\"\\\",\\n \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\\n \\\"RestartCount\\\": 0,\\n \\\"Driver\\\": \\\"overlay2\\\",\\n \\\"MountLabel\\\": \\\"\\\",\\n \\\"ProcessLabel\\\": \\\"\\\",\\n \\\"AppArmorProfile\\\": \\\"\\\",\\n \\\"ExecIDs\\\": null,\\n \\\"HostConfig\\\": {\\n \\\"Binds\\\": [\\n \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\\n \\\"/etc/ceph:/etc/ceph:z\\\",\\n \\\"/var/run/ceph:/var/run/ceph:z\\\",\\n \\\"/etc/localtime:/etc/localtime:ro\\\"\\n ],\\n \\\"ContainerIDFile\\\": \\\"\\\",\\n \\\"LogConfig\\\": {\\n \\\"Type\\\": \\\"journald\\\",\\n \\\"Config\\\": {}\\n },\\n \\\"NetworkMode\\\": \\\"host\\\",\\n \\\"PortBindings\\\": {},\\n \\\"RestartPolicy\\\": {\\n \\\"Name\\\": \\\"no\\\",\\n \\\"MaximumRetryCount\\\": 0\\n },\\n \\\"AutoRemove\\\": true,\\n \\\"VolumeDriver\\\": \\\"\\\",\\n \\\"VolumesFrom\\\": null,\\n \\\"CapAdd\\\": null,\\n \\\"CapDrop\\\": null,\\n \\\"Dns\\\": [],\\n \\\"DnsOptions\\\": [],\\n \\\"DnsSearch\\\": [],\\n 
\\\"ExtraHosts\\\": null,\\n \\\"GroupAdd\\\": null,\\n \\\"IpcMode\\\": \\\"\\\",\\n \\\"Cgroup\\\": \\\"\\\",\\n \\\"Links\\\": null,\\n \\\"OomScoreAdj\\\": 0,\\n \\\"PidMode\\\": \\\"\\\",\\n \\\"Privileged\\\": false,\\n \\\"PublishAllPorts\\\": false,\\n \\\"ReadonlyRootfs\\\": false,\\n \\\"SecurityOpt\\\": null,\\n \\\"UTSMode\\\": \\\"\\\",\\n \\\"UsernsMode\\\": \\\"\\\",\\n \\\"ShmSize\\\": 67108864,\\n \\\"Runtime\\\": \\\"docker-runc\\\",\\n \\\"ConsoleSize\\\": [\\n 0,\\n 0\\n ],\\n \\\"Isolation\\\": \\\"\\\",\\n \\\"CpuShares\\\": 0,\\n \\\"Memory\\\": 1073741824,\\n \\\"NanoCpus\\\": 0,\\n \\\"CgroupParent\\\": \\\"\\\",\\n \\\"BlkioWeight\\\": 0,\\n \\\"BlkioWeightDevice\\\": null,\\n \\\"BlkioDeviceReadBps\\\": null,\\n \\\"BlkioDeviceWriteBps\\\": null,\\n \\\"BlkioDeviceReadIOps\\\": null,\\n \\\"BlkioDeviceWriteIOps\\\": null,\\n \\\"CpuPeriod\\\": 0,\\n \\\"CpuQuota\\\": 100000,\\n \\\"CpuRealtimePeriod\\\": 0,\\n \\\"CpuRealtimeRuntime\\\": 0,\\n \\\"CpusetCpus\\\": \\\"\\\",\\n \\\"CpusetMems\\\": \\\"\\\",\\n \\\"Devices\\\": [],\\n \\\"DiskQuota\\\": 0,\\n \\\"KernelMemory\\\": 0,\\n \\\"MemoryReservation\\\": 0,\\n \\\"MemorySwap\\\": 2147483648,\\n \\\"MemorySwappiness\\\": -1,\\n \\\"OomKillDisable\\\": false,\\n \\\"PidsLimit\\\": 0,\\n \\\"Ulimits\\\": null,\\n \\\"CpuCount\\\": 0,\\n \\\"CpuPercent\\\": 0,\\n \\\"IOMaximumIOps\\\": 0,\\n \\\"IOMaximumBandwidth\\\": 0\\n },\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558-init/diff:/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff:/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\\n \\\"MergedDir\\\": 
\\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/work\\\"\\n }\\n },\\n \\\"Mounts\\\": [\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/localtime\\\",\\n \\\"Destination\\\": \\\"/etc/localtime\\\",\\n \\\"Mode\\\": \\\"ro\\\",\\n \\\"RW\\\": false,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"volume\\\",\\n \\\"Name\\\": \\\"d532fedca1b6d8392347154e71bf722e79d74fd82670fc2a49f8d3fc1d56d161\\\",\\n \\\"Source\\\": \\\"/var/lib/docker/volumes/d532fedca1b6d8392347154e71bf722e79d74fd82670fc2a49f8d3fc1d56d161/_data\\\",\\n \\\"Destination\\\": \\\"/etc/ganesha\\\",\\n \\\"Driver\\\": \\\"local\\\",\\n \\\"Mode\\\": \\\"\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/lib/ceph\\\",\\n \\\"Destination\\\": \\\"/var/lib/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/ceph\\\",\\n \\\"Destination\\\": \\\"/etc/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/run/ceph\\\",\\n \\\"Destination\\\": \\\"/var/run/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n }\\n ],\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"controller-0\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": true,\\n \\\"AttachStderr\\\": true,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n 
\\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"IP_VERSION=4\\\",\\n \\\"MON_IP=172.17.3.18\\\",\\n \\\"CLUSTER=ceph\\\",\\n \\\"FSID=53912472-747b-11e8-95a3-5254003d7dcb\\\",\\n \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\\n \\\"CEPH_DAEMON=MON\\\",\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-6\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": null,\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh 
${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"NetworkSettings\\\": {\\n \\\"Bridge\\\": \\\"\\\",\\n \\\"SandboxID\\\": \\\"b3067360b0180302c4d89730192f368dac349d894129a3da6d44325aa6eb1c61\\\",\\n \\\"HairpinMode\\\": false,\\n \\\"LinkLocalIPv6Address\\\": \\\"\\\",\\n \\\"LinkLocalIPv6PrefixLen\\\": 0,\\n \\\"Ports\\\": {},\\n \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\\n \\\"SecondaryIPAddresses\\\": null,\\n \\\"SecondaryIPv6Addresses\\\": null,\\n \\\"EndpointID\\\": \\\"\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"MacAddress\\\": \\\"\\\",\\n \\\"Networks\\\": {\\n \\\"host\\\": {\\n \\\"IPAMConfig\\\": null,\\n \\\"Links\\\": null,\\n \\\"Aliases\\\": null,\\n \\\"NetworkID\\\": \\\"711dcc9ffeccb18f54b7514bd551f9bdb54b06d72e8dc7b01a2c8e3b296c8f01\\\",\\n \\\"EndpointID\\\": \\\"8ce97204aa7fce9ca1ea5681bede3d64665fa1799f687a4ddc2655cd0e5c0312\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n 
\\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"MacAddress\\\": \\\"\\\"\\n }\\n }\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105\\\",\", \" \\\"Created\\\": \\\"2018-06-22T13:14:06.054795034Z\\\",\", \" \\\"Path\\\": \\\"/entrypoint.sh\\\",\", \" \\\"Args\\\": [],\", \" \\\"State\\\": {\", \" \\\"Status\\\": \\\"running\\\",\", \" \\\"Running\\\": true,\", \" \\\"Paused\\\": false,\", \" \\\"Restarting\\\": false,\", \" \\\"OOMKilled\\\": false,\", \" \\\"Dead\\\": false,\", \" \\\"Pid\\\": 50029,\", \" \\\"ExitCode\\\": 0,\", \" \\\"Error\\\": \\\"\\\",\", \" \\\"StartedAt\\\": \\\"2018-06-22T13:14:06.243843393Z\\\",\", \" \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\", \" },\", \" \\\"Image\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/resolv.conf\\\",\", \" \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/hostname\\\",\", \" \\\"HostsPath\\\": \\\"/var/lib/docker/containers/2d71e99d5d902f3e448ef5b4f257c523779fe6fb0b8a806ce828f91360ec5105/hosts\\\",\", \" \\\"LogPath\\\": \\\"\\\",\", \" \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\", \" \\\"RestartCount\\\": 0,\", \" \\\"Driver\\\": \\\"overlay2\\\",\", \" \\\"MountLabel\\\": \\\"\\\",\", \" \\\"ProcessLabel\\\": \\\"\\\",\", \" \\\"AppArmorProfile\\\": \\\"\\\",\", \" \\\"ExecIDs\\\": null,\", \" \\\"HostConfig\\\": {\", \" \\\"Binds\\\": [\", \" \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\",\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" 
\\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" \\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" \\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" \\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": \\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 1073741824,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" \\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 2147483648,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" 
\\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558-init/diff:/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff:/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/acafcedc57179c8b1eadea659bf90e0f57285d4c5846b590b8ff9971095fc558/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"volume\\\",\", \" \\\"Name\\\": \\\"d532fedca1b6d8392347154e71bf722e79d74fd82670fc2a49f8d3fc1d56d161\\\",\", \" \\\"Source\\\": \\\"/var/lib/docker/volumes/d532fedca1b6d8392347154e71bf722e79d74fd82670fc2a49f8d3fc1d56d161/_data\\\",\", \" \\\"Destination\\\": \\\"/etc/ganesha\\\",\", \" \\\"Driver\\\": \\\"local\\\",\", \" \\\"Mode\\\": \\\"\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" 
\\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.18\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=53912472-747b-11e8-95a3-5254003d7dcb\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" \\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-6\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": 
\\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" \\\"Bridge\\\": 
\\\"\\\",\", \" \\\"SandboxID\\\": \\\"b3067360b0180302c4d89730192f368dac349d894129a3da6d44325aa6eb1c61\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": \\\"711dcc9ffeccb18f54b7514bd551f9bdb54b06d72e8dc7b01a2c8e3b296c8f01\\\",\", \" \\\"EndpointID\\\": \\\"8ce97204aa7fce9ca1ea5681bede3d64665fa1799f687a4ddc2655cd0e5c0312\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Friday 22 June 2018 09:14:39 -0400 (0:00:00.660) 0:01:33.315 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Friday 22 June 2018 09:14:39 -0400 (0:00:00.042) 0:01:33.357 *********** ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Friday 22 June 2018 09:14:39 -0400 (0:00:00.042) 0:01:33.400 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Friday 22 June 2018 09:14:39 -0400 (0:00:00.044) 0:01:33.444 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Friday 22 June 2018 09:14:39 -0400 (0:00:00.047) 0:01:33.491 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Friday 22 June 2018 09:14:39 -0400 (0:00:00.041) 0:01:33.533 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Friday 22 June 2018 09:14:39 -0400 (0:00:00.043) 0:01:33.576 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\"], \"delta\": \"0:00:00.028122\", \"end\": \"2018-06-22 13:14:40.436694\", \"failed_when_result\": false, \"rc\": 
0, \"start\": \"2018-06-22 13:14:40.408572\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": 
\\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n 
\\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} 
--entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n 
\\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" 
\\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" 
\\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" 
\\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\", \" \\\"MergedDir\\\": 
\\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Friday 22 June 2018 09:14:40 -0400 (0:00:00.633) 0:01:34.209 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Friday 22 June 2018 09:14:40 -0400 (0:00:00.044) 0:01:34.254 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Friday 22 June 2018 09:14:40 -0400 (0:00:00.046) 0:01:34.300 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Friday 22 June 2018 09:14:40 -0400 (0:00:00.043) 0:01:34.344 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Friday 22 June 2018 09:14:40 -0400 (0:00:00.049) 0:01:34.394 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Friday 22 June 2018 09:14:40 -0400 (0:00:00.045) 0:01:34.439 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Friday 22 June 2018 09:14:40 -0400 (0:00:00.130) 0:01:34.569 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mon_image_repodigest_before_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Friday 22 June 2018 09:14:40 -0400 (0:00:00.085) 0:01:34.655 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Friday 22 June 2018 09:14:40 -0400 
(0:00:00.045) 0:01:34.701 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Friday 22 June 2018 09:14:40 -0400 (0:00:00.048) 0:01:34.749 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Friday 22 June 2018 09:14:41 -0400 (0:00:00.046) 0:01:34.795 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Friday 22 June 2018 09:14:41 -0400 (0:00:00.049) 0:01:34.845 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Friday 22 June 2018 09:14:41 -0400 (0:00:00.045) 0:01:34.890 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Friday 22 June 2018 09:14:41 -0400 (0:00:00.046) 0:01:34.937 *********** ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": 
\"0:00:00.036769\", \"end\": \"2018-06-22 13:14:41.717045\", \"rc\": 0, \"start\": \"2018-06-22 13:14:41.680276\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Friday 22 June 2018 09:14:41 -0400 (0:00:00.546) 0:01:35.483 *********** ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.029313\", \"end\": \"2018-06-22 13:14:42.243271\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:14:42.213958\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n 
\\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} 
--entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n 
\\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": 
\\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", 
\" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" 
\\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" 
\\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d 
--net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/1843f5ba2fd3214846eb88df558df4b1de33c037de5038dcbc923aa3191b597d/diff:/var/lib/docker/overlay2/4847c6f9051219ec8cb8e000d1501580e783cd563bd59a04c8b2831356c97010/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/88cd8cc0d0ec29fc2f82485e8405003bf1d6884b0633f85380142a4cdca48725/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.531) 0:01:36.015 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.078) 0:01:36.094 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.053) 0:01:36.148 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.044) 0:01:36.192 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.043) 0:01:36.235 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.055) 
0:01:36.291 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.050) 0:01:36.342 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.046) 0:01:36.389 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.049) 0:01:36.438 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.044) 0:01:36.483 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.052) 0:01:36.535 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] 
*********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.044) 0:01:36.579 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Friday 22 June 2018 09:14:42 -0400 (0:00:00.043) 0:01:36.623 *********** ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.571111\", \"end\": \"2018-06-22 13:14:43.929929\", \"rc\": 0, \"start\": \"2018-06-22 13:14:43.358818\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Friday 22 June 2018 09:14:43 -0400 (0:00:01.079) 0:01:37.702 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Friday 22 June 2018 09:14:44 -0400 (0:00:00.074) 0:01:37.777 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Friday 22 June 2018 
09:14:44 -0400 (0:00:00.047) 0:01:37.825 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Friday 22 June 2018 09:14:44 -0400 (0:00:00.049) 0:01:37.874 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Friday 22 June 2018 09:14:44 -0400 (0:00:00.076) 0:01:37.951 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Friday 22 June 2018 09:14:44 -0400 (0:00:00.052) 0:01:38.003 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Friday 22 June 2018 09:14:44 -0400 (0:00:00.051) 0:01:38.055 *********** ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": 
\"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Friday 22 June 2018 09:14:46 -0400 (0:00:02.332) 0:01:40.387 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Friday 22 June 2018 09:14:46 -0400 (0:00:00.049) 0:01:40.437 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not 
exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Friday 22 June 2018 09:14:46 -0400 (0:00:00.049) 0:01:40.486 *********** ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 80, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Friday 22 June 2018 09:14:46 -0400 (0:00:00.201) 0:01:40.687 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Friday 22 June 2018 09:14:46 -0400 (0:00:00.053) 0:01:40.740 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Friday 22 June 2018 09:14:47 -0400 (0:00:00.046) 0:01:40.787 *********** ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Friday 22 June 2018 09:14:47 -0400 (0:00:00.521) 0:01:41.309 *********** ", "ok: [controller-0] => {\"changed\": false, \"checksum\": 
\"8376233e5a1bc87f2c4fab91f94a8b75f6c6a2f6\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"0f740ab4fb6329f001a8e004a4e1d994\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 761, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673287.59-135812560192411/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Friday 22 June 2018 09:14:49 -0400 (0:00:01.705) 0:01:43.015 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2", "Friday 22 June 2018 09:14:49 -0400 (0:00:00.049) 0:01:43.064 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd_mgr\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mgr : create mgr directory] *****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2", "Friday 22 June 2018 09:14:49 -0400 (0:00:00.198) 0:01:43.263 *********** ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10", "Friday 22 June 2018 09:14:50 -0400 (0:00:00.615) 0:01:43.879 *********** ", "changed: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': 
u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"name\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"md5sum\": \"27b1ed102ad44a0a24aa2cc10f78f0d3\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673290.16-208677308831713/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"dest\": \"/etc/ceph/ceph.client.admin.keyring\", \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : set mgr key permissions] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24", "Friday 22 June 2018 09:14:52 -0400 (0:00:02.600) 0:01:46.480 *********** ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"state\": \"file\", \"uid\": 167}", "", "TASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2", "Friday 22 June 2018 09:14:53 -0400 (0:00:00.518) 0:01:46.998 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : install ceph mgr for debian] **********************************", "task 
path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9", "Friday 22 June 2018 09:14:53 -0400 (0:00:00.045) 0:01:47.043 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17", "Friday 22 June 2018 09:14:53 -0400 (0:00:00.044) 0:01:47.088 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25", "Friday 22 June 2018 09:14:53 -0400 (0:00:00.046) 0:01:47.135 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : start and add that the mgr service to the init sequence] ******", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35", "Friday 22 June 2018 09:14:53 -0400 (0:00:00.044) 0:01:47.179 *********** ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2", "Friday 22 June 2018 09:14:53 -0400 (0:00:00.047) 0:01:47.226 *********** ", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", 
"changed: [controller-0] => {\"changed\": true, \"checksum\": \"fb2f3078fffe963a7fd0473c7b908931939d5c73\", \"dest\": \"/etc/systemd/system/ceph-mgr@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"7b527fb0a44d25cf825cb2b6fcb2b07e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 733, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673293.6-41326754079202/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mgr : systemd start mgr container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13", "Friday 22 June 2018 09:14:56 -0400 (0:00:02.875) 0:01:50.102 *********** ", "ok: [controller-0] => {\"changed\": false, \"enabled\": true, \"name\": \"ceph-mgr@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket basic.target system-ceph\\\\x5cx2dmgr.slice docker.service\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Manager\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", 
\"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 --name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mgr@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mgr@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127793\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", 
\"LimitSIGPENDING\": \"127793\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mgr@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmgr.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": 
\"system-ceph\\\\x5cx2dmgr.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19", "Friday 22 June 2018 09:14:57 -0400 (0:00:00.805) 0:01:50.907 *********** ", "changed: [controller-0 -> 192.168.24.8] => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"--format\", \"json\", \"mgr\", \"module\", \"ls\"], \"delta\": \"0:00:00.389752\", \"end\": \"2018-06-22 13:14:58.094029\", \"rc\": 0, \"start\": \"2018-06-22 13:14:57.704277\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"enabled_modules\\\":[\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[]}\", \"stdout_lines\": [\"\", \"{\\\"enabled_modules\\\":[\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[]}\"]}", "", "TASK [ceph-mgr : set _ceph_mgr_modules fact] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26", "Friday 22 June 2018 09:14:58 -0400 (0:00:00.954) 0:01:51.862 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_ceph_mgr_modules\": {\"disabled_modules\": [], \"enabled_modules\": [\"restful\", \"status\"]}}, \"changed\": false}", "", "TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:30", "Friday 22 June 2018 09:14:58 -0400 (0:00:00.105) 0:01:51.967 *********** ", "changed: [controller-0 -> 192.168.24.8] => (item=restful) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"restful\"], \"delta\": \"0:00:01.349993\", \"end\": \"2018-06-22 13:15:00.100317\", \"item\": \"restful\", \"rc\": 0, \"start\": \"2018-06-22 13:14:58.750324\", \"stderr\": \"\", \"stderr_lines\": [], 
\"stdout\": \"\", \"stdout_lines\": []}", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add modules to ceph-mgr] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:41", "Friday 22 June 2018 09:15:00 -0400 (0:00:01.948) 0:01:53.916 *********** ", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Friday 22 June 2018 09:15:00 -0400 (0:00:00.027) 0:01:53.943 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Friday 22 June 2018 09:15:00 -0400 (0:00:00.064) 0:01:54.008 *********** ", "ok: [controller-0] => {\"changed\": false, \"checksum\": \"f36b3460f6762a853a3dab1958afb7d83ff8f234\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"mode\": \"0750\", \"owner\": \"root\", \"path\": \"/tmp/restart_mgr_daemon.sh\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 570, \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Friday 22 June 2018 09:15:02 -0400 (0:00:01.995) 0:01:56.003 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Friday 22 June 2018 09:15:02 -0400 (0:00:00.083) 0:01:56.087 *********** ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", 
"", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Friday 22 June 2018 09:15:02 -0400 (0:00:00.126) 0:01:56.213 *********** ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph manager install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:129", "Friday 22 June 2018 09:15:02 -0400 (0:00:00.093) 0:01:56.306 *********** ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"end\": \"20180622091502Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY [osds] ********************************************************************", "", "TASK [set ceph osd install 'In Progress'] **************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:141", "Friday 22 June 2018 09:15:02 -0400 (0:00:00.146) 0:01:56.453 *********** ", "ok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"start\": \"20180622091502Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Friday 22 June 2018 09:15:02 -0400 (0:00:00.068) 0:01:56.521 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Friday 22 June 2018 09:15:02 -0400 (0:00:00.040) 0:01:56.562 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", 
\"--filter=name=ceph-osd-ceph-0\"], \"delta\": \"0:00:00.024219\", \"end\": \"2018-06-22 13:15:03.307661\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:15:03.283442\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.498) 0:01:57.060 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.042) 0:01:57.103 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.039) 0:01:57.143 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.040) 0:01:57.183 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.039) 0:01:57.222 *********** ", "skipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.041) 0:01:57.263 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.046) 0:01:57.310 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.040) 0:01:57.350 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.039) 0:01:57.389 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.037) 0:01:57.427 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.036) 0:01:57.463 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.035) 0:01:57.498 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.038) 0:01:57.537 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Friday 22 June 2018 09:15:03 -0400 (0:00:00.197) 0:01:57.735 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.040) 0:01:57.775 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.039) 0:01:57.815 *********** ", "skipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.037) 0:01:57.852 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.043) 0:01:57.896 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.040) 0:01:57.937 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.045) 0:01:57.982 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.038) 0:01:58.021 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.038) 0:01:58.060 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.039) 0:01:58.099 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.037) 0:01:58.137 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.036) 0:01:58.174 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.042) 0:01:58.216 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.040) 0:01:58.257 *********** ", "ok: [ceph-0] => 
{\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Friday 22 June 2018 09:15:04 -0400 (0:00:00.471) 0:01:58.729 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Friday 22 June 2018 09:15:05 -0400 (0:00:00.068) 0:01:58.798 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Friday 22 June 2018 09:15:05 -0400 (0:00:00.066) 0:01:58.864 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Friday 22 June 2018 09:15:05 -0400 (0:00:00.069) 0:01:58.934 *********** ", "ok: [ceph-0 -> 192.168.24.8] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Friday 22 June 2018 09:15:05 -0400 (0:00:00.129) 0:01:59.063 *********** ", "ok: [ceph-0 -> 192.168.24.8] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.325414\", \"end\": \"2018-06-22 13:15:06.137908\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:15:05.812494\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"53912472-747b-11e8-95a3-5254003d7dcb\", \"stdout_lines\": [\"53912472-747b-11e8-95a3-5254003d7dcb\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Friday 22 June 2018 09:15:06 -0400 (0:00:00.843) 0:01:59.907 *********** ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Friday 22 June 2018 09:15:06 -0400 (0:00:00.197) 0:02:00.104 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Friday 22 June 2018 09:15:06 -0400 (0:00:00.047) 0:02:00.152 *********** ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 80, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : 
set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Friday 22 June 2018 09:15:06 -0400 (0:00:00.197) 0:02:00.349 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"fsid\": \"53912472-747b-11e8-95a3-5254003d7dcb\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Friday 22 June 2018 09:15:06 -0400 (0:00:00.072) 0:02:00.422 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Friday 22 June 2018 09:15:06 -0400 (0:00:00.068) 0:02:00.490 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Friday 22 June 2018 09:15:06 -0400 (0:00:00.040) 0:02:00.530 *********** ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 53912472-747b-11e8-95a3-5254003d7dcb | tee /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Friday 22 June 2018 09:15:06 -0400 (0:00:00.194) 0:02:00.724 *********** ", 
"skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Friday 22 June 2018 09:15:06 -0400 (0:00:00.038) 0:02:00.763 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Friday 22 June 2018 09:15:07 -0400 (0:00:00.039) 0:02:00.802 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"mds_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Friday 22 June 2018 09:15:07 -0400 (0:00:00.075) 0:02:00.878 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Friday 22 June 2018 09:15:07 -0400 (0:00:00.049) 0:02:00.927 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Friday 22 June 2018 09:15:07 -0400 (0:00:00.068) 0:02:00.996 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Friday 22 June 2018 
09:15:07 -0400 (0:00:00.065) 0:02:01.061 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Friday 22 June 2018 09:15:07 -0400 (0:00:00.067) 0:02:01.129 *********** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.003396\", \"end\": \"2018-06-22 13:15:07.879754\", \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-06-22 13:15:07.876358\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Friday 22 June 2018 09:15:07 -0400 (0:00:00.512) 0:02:01.641 *********** ", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-06-22 13:15:07.879754', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.003396', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-06-22 13:15:07.876358', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdb\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.003396\", \"end\": \"2018-06-22 13:15:07.879754\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdb\", 
\"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-06-22 13:15:07.876358\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Friday 22 June 2018 09:15:07 -0400 (0:00:00.090) 0:02:01.732 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\"]}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Friday 22 June 2018 09:15:08 -0400 (0:00:00.080) 0:02:01.812 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Friday 22 June 2018 09:15:08 -0400 (0:00:00.044) 0:02:01.857 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Friday 22 June 2018 09:15:08 -0400 (0:00:00.043) 0:02:01.900 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Friday 22 June 2018 09:15:08 -0400 (0:00:00.042) 0:02:01.943 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Friday 22 June 2018 09:15:08 -0400 (0:00:00.042) 0:02:01.985 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Friday 22 June 2018 09:15:08 -0400 (0:00:00.080) 0:02:02.065 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Friday 22 June 2018 09:15:08 -0400 (0:00:00.067) 0:02:02.133 *********** ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", 
\"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": 
\"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Friday 22 June 2018 09:15:13 -0400 (0:00:05.077) 0:02:07.211 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Friday 22 June 2018 09:15:13 -0400 (0:00:00.042) 0:02:07.253 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, 
radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Friday 22 June 2018 09:15:13 -0400 (0:00:00.039) 0:02:07.292 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Friday 22 June 2018 09:15:13 -0400 (0:00:00.038) 0:02:07.331 *********** ", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Friday 22 June 2018 09:15:14 -0400 (0:00:00.875) 0:02:08.207 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Friday 22 June 2018 09:15:14 -0400 (0:00:00.068) 0:02:08.276 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Friday 22 June 2018 09:15:14 -0400 (0:00:00.038) 
0:02:08.315 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.024004\", \"end\": \"2018-06-22 13:15:15.045795\", \"rc\": 0, \"start\": \"2018-06-22 13:15:15.021791\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Friday 22 June 2018 09:15:15 -0400 (0:00:00.486) 0:02:08.801 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Friday 22 June 2018 09:15:15 -0400 (0:00:00.170) 0:02:08.972 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-ceph-0\"], \"delta\": \"0:00:00.026604\", \"end\": \"2018-06-22 13:15:15.819480\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:15:15.792876\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Friday 22 June 2018 09:15:15 -0400 (0:00:00.601) 0:02:09.573 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK 
[ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Friday 22 June 2018 09:15:15 -0400 (0:00:00.090) 0:02:09.664 *********** ", "ok: [ceph-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Friday 22 June 2018 09:15:16 -0400 (0:00:00.220) 0:02:09.884 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Friday 22 June 2018 09:15:16 -0400 (0:00:00.187) 0:02:10.072 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Friday 22 June 2018 09:15:16 -0400 (0:00:00.187) 0:02:10.259 *********** ", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1529673251.412, 
\"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"793b49d83f132a70fc67d6c0569cfa8c71650741\", \"ctime\": 1529673251.412, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 29440356, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673251.412, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1529673251.858, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"dae692cfee0fa0a32ffaad10f7d24e310a009db9\", \"ctime\": 1529673251.858, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 29440357, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673251.858, \"nlink\": 1, \"path\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1529673252.32, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"d8a7f9eb9d9dc0395da75fc7759797ea97e335aa\", \"ctime\": 1529673252.32, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 46404843, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673252.32, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1529673252.774, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", 
\"checksum\": \"9613a61f8c01ce2de5a65853e6a5574e32ab15c0\", \"ctime\": 1529673252.774, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 51235195, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673252.774, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1529673253.23, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"11de432a77f2de2b2705ea5780f568345ba62116\", \"ctime\": 1529673253.23, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 56054668, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673253.23, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, 
\"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1529673253.677, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"fa627b4b6c0e4d6b86f16984405cd43c6dd3021c\", \"ctime\": 1529673253.677, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 58720433, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673253.677, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1529673290.805, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"ctime\": 1529673255.881, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 29440358, \"isblk\": false, \"ischr\": false, 
\"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673255.881, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Friday 22 June 2018 09:15:17 -0400 (0:00:01.315) 0:02:11.575 *********** ", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673251.412, u'block_size': 4096, u'inode': 29440356, u'isgid': False, u'size': 159, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1529673251.412, u'isdir': False, u'ctime': 1529673251.412, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'793b49d83f132a70fc67d6c0569cfa8c71650741', u'rusr': True, u'attributes': []}, 
u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1529673251.412, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"793b49d83f132a70fc67d6c0569cfa8c71650741\", \"ctime\": 1529673251.412, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 29440356, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 
1529673251.412, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], 
\"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673251.858, u'block_size': 4096, u'inode': 29440357, u'isgid': False, u'size': 688, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1529673251.858, u'isdir': False, u'ctime': 1529673251.858, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'dae692cfee0fa0a32ffaad10f7d24e310a009db9', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, 
\"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1529673251.858, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"dae692cfee0fa0a32ffaad10f7d24e310a009db9\", \"ctime\": 1529673251.858, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 29440357, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673251.858, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673252.32, u'block_size': 4096, u'inode': 46404843, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': 
True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1529673252.32, u'isdir': False, u'ctime': 1529673252.32, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'd8a7f9eb9d9dc0395da75fc7759797ea97e335aa', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1529673252.32, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"d8a7f9eb9d9dc0395da75fc7759797ea97e335aa\", \"ctime\": 1529673252.32, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 46404843, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673252.32, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673252.774, u'block_size': 4096, u'inode': 51235195, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': 
u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1529673252.774, u'isdir': False, u'ctime': 1529673252.774, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'9613a61f8c01ce2de5a65853e6a5574e32ab15c0', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1529673252.774, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": 
\"unknown\", \"checksum\": \"9613a61f8c01ce2de5a65853e6a5574e32ab15c0\", \"ctime\": 1529673252.774, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 51235195, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673252.774, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673253.23, u'block_size': 4096, u'inode': 56054668, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1529673253.23, u'isdir': False, u'ctime': 1529673253.23, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': 
u'11de432a77f2de2b2705ea5780f568345ba62116', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1529673253.23, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"11de432a77f2de2b2705ea5780f568345ba62116\", \"ctime\": 1529673253.23, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 56054668, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": 
true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673253.23, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673253.677, u'block_size': 4096, u'inode': 58720433, u'isgid': False, u'size': 113, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1529673253.677, u'isdir': False, u'ctime': 1529673253.677, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'fa627b4b6c0e4d6b86f16984405cd43c6dd3021c', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': 
{u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1529673253.677, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"fa627b4b6c0e4d6b86f16984405cd43c6dd3021c\", \"ctime\": 1529673253.677, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 58720433, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673253.677, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": 
true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'isuid': False, u'uid': 988, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1529673255.881, u'block_size': 4096, u'inode': 29440358, u'isgid': False, u'size': 67, u'wgrp': False, u'executable': False, u'charset': u'unknown', u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 985, u'ischr': False, u'wusr': True, u'writeable': True, u'mimetype': u'unknown', u'blocks': 8, u'xoth': False, u'islnk': False, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1529673290.805, u'isdir': False, u'ctime': 1529673255.881, u'isblk': False, u'xgrp': False, u'dev': 64769, u'roth': True, u'isfifo': False, u'mode': u'0644', u'checksum': u'f1eb3e81a4f49f68787b67580eb8b9601f3e1e36', u'rusr': True, u'attributes': []}, u'changed': False, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': 
None, 'failed': False}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1529673290.805, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"f1eb3e81a4f49f68787b67580eb8b9601f3e1e36\", \"ctime\": 1529673255.881, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 985, \"gr_name\": \"mistral\", \"inode\": 29440358, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1529673255.881, \"nlink\": 1, \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 988, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] 
*******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.265) 0:02:11.840 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.039) 0:02:11.880 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.038) 0:02:11.918 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.044) 0:02:11.962 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.050) 0:02:12.013 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.043) 0:02:12.056 *********** ", "skipping: [ceph-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.042) 0:02:12.099 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.042) 0:02:12.142 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.041) 0:02:12.183 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.040) 0:02:12.223 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.055) 0:02:12.279 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.171) 0:02:12.451 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.038) 0:02:12.489 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.039) 0:02:12.529 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.038) 0:02:12.567 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.039) 0:02:12.606 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.037) 0:02:12.644 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.051) 0:02:12.696 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Friday 22 June 2018 09:15:18 -0400 (0:00:00.041) 0:02:12.738 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.041) 0:02:12.779 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.041) 0:02:12.820 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.040) 0:02:12.860 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Friday 22 June 2018 09:15:19 -0400 
(0:00:00.039) 0:02:12.899 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.048) 0:02:12.948 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.043) 0:02:12.991 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.038) 0:02:13.030 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.038) 0:02:13.069 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.038) 0:02:13.107 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task 
path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.038) 0:02:13.145 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Friday 22 June 2018 09:15:19 -0400 (0:00:00.043) 0:02:13.188 *********** ", "ok: [ceph-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:16.249972\", \"end\": \"2018-06-22 13:15:36.140819\", \"rc\": 0, \"start\": \"2018-06-22 13:15:19.890847\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Verifying Checksum\\nb8aa42cec17a: Download complete\\n9a32f102e677: Verifying Checksum\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Verifying Checksum\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Verifying Checksum\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Friday 22 June 2018 09:15:36 -0400 (0:00:16.716) 0:02:29.905 *********** ", "changed: [ceph-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.024248\", \"end\": \"2018-06-22 13:15:36.638861\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:15:36.614613\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n 
\\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 
3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": 
\\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, 
Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/c3baf43ba63707bde52d6ad9875b8992dcd03576bd8e11611ec48eabc599b419/diff:/var/lib/docker/overlay2/0589eead877a238570964f90f9ccd2a9e5b5e3bfb54b187631f8d5930e5c180d/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" 
\\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" 
\\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/c3baf43ba63707bde52d6ad9875b8992dcd03576bd8e11611ec48eabc599b419/diff:/var/lib/docker/overlay2/0589eead877a238570964f90f9ccd2a9e5b5e3bfb54b187631f8d5930e5c180d/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8956de1a6cc0965320854f422c6c844143e0985b70a1be35de566f04a1040756/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Friday 22 June 2018 09:15:36 -0400 (0:00:00.501) 0:02:30.407 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Friday 22 June 2018 09:15:36 -0400 (0:00:00.076) 0:02:30.483 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Friday 22 June 2018 09:15:36 -0400 (0:00:00.042) 0:02:30.526 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Friday 22 June 2018 09:15:36 -0400 (0:00:00.050) 0:02:30.576 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Friday 22 June 2018 09:15:36 -0400 (0:00:00.043) 0:02:30.620 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Friday 22 June 2018 09:15:36 -0400 (0:00:00.042) 0:02:30.663 *********** ", "skipping: 
[ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Friday 22 June 2018 09:15:36 -0400 (0:00:00.043) 0:02:30.706 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Friday 22 June 2018 09:15:36 -0400 (0:00:00.042) 0:02:30.748 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Friday 22 June 2018 09:15:37 -0400 (0:00:00.045) 0:02:30.793 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Friday 22 June 2018 09:15:37 -0400 (0:00:00.046) 0:02:30.840 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Friday 22 June 2018 09:15:37 -0400 (0:00:00.044) 0:02:30.884 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Friday 22 June 2018 09:15:37 -0400 (0:00:00.044) 0:02:30.928 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Friday 22 June 2018 09:15:37 -0400 (0:00:00.042) 0:02:30.971 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.580409\", \"end\": \"2018-06-22 13:15:38.322819\", \"rc\": 0, \"start\": \"2018-06-22 13:15:37.742410\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Friday 22 June 2018 09:15:38 -0400 (0:00:01.119) 0:02:32.091 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Friday 22 June 2018 09:15:38 -0400 (0:00:00.073) 0:02:32.164 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Friday 22 June 2018 09:15:38 -0400 (0:00:00.046) 0:02:32.210 *********** ", "skipping: 
[ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Friday 22 June 2018 09:15:38 -0400 (0:00:00.047) 0:02:32.258 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Friday 22 June 2018 09:15:38 -0400 (0:00:00.077) 0:02:32.336 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Friday 22 June 2018 09:15:38 -0400 (0:00:00.044) 0:02:32.381 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Friday 22 June 2018 09:15:38 -0400 (0:00:00.048) 0:02:32.429 *********** ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, 
\"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Friday 22 June 2018 09:15:40 -0400 (0:00:02.211) 0:02:34.641 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Friday 22 June 2018 09:15:40 -0400 (0:00:00.043) 0:02:34.684 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Friday 22 June 2018 09:15:40 -0400 
(0:00:00.044) 0:02:34.728 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Friday 22 June 2018 09:15:41 -0400 (0:00:00.054) 0:02:34.782 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Friday 22 June 2018 09:15:41 -0400 (0:00:00.044) 0:02:34.827 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Friday 22 June 2018 09:15:41 -0400 (0:00:00.038) 0:02:34.866 *********** ", "changed: [ceph-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Friday 22 June 2018 09:15:41 -0400 (0:00:00.476) 0:02:35.342 *********** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set 
_osd_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0", 
"NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"d45396dce38fd1819887516b5af41173fc14e408\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"5268a9201371c7a177ada3f251f5af2d\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 871, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673341.62-105295216168497/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Friday 22 June 2018 09:15:44 -0400 (0:00:03.084) 0:02:38.427 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure public_network configured] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2", "Friday 22 June 2018 09:15:44 -0400 (0:00:00.043) 0:02:38.471 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure cluster_network configured] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8", "Friday 22 June 2018 09:15:44 -0400 (0:00:00.039) 0:02:38.510 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure journal_size configured] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15", "Friday 22 June 2018 09:15:44 -0400 (0:00:00.041) 0:02:38.552 *********** ", "ok: [ceph-0] => {", " \"msg\": \"WARNING: journal_size is configured to 512, which is less than 5GB. 
This is not recommended and can lead to severe issues.\"", "}", "", "TASK [ceph-osd : make sure an osd scenario was chosen] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23", "Friday 22 June 2018 09:15:44 -0400 (0:00:00.072) 0:02:38.625 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure a valid osd scenario was chosen] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31", "Friday 22 June 2018 09:15:44 -0400 (0:00:00.044) 0:02:38.669 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify devices have been provided] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39", "Friday 22 June 2018 09:15:44 -0400 (0:00:00.044) 0:02:38.714 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49", "Friday 22 June 2018 09:15:44 -0400 (0:00:00.050) 0:02:38.764 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify lvm_volumes have been provided] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.044) 0:02:38.809 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69", 
"Friday 22 June 2018 09:15:45 -0400 (0:00:00.044) 0:02:38.853 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the devices variable is a list] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.048) 0:02:38.901 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify dedicated devices have been provided] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.047) 0:02:38.949 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.042) 0:02:38.991 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.042) 0:02:39.034 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include system_tuning.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.049) 0:02:39.084 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0", "", "TASK [ceph-osd : disable osd directory parsing by updatedb] ********************", "task path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.068) 0:02:39.152 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.038) 0:02:39.191 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : create tmpfiles.d directory] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.039) 0:02:39.231 *********** ", "ok: [ceph-0] => {\"changed\": false, \"gid\": 0, \"group\": \"root\", \"mode\": \"0755\", \"owner\": \"root\", \"path\": \"/etc/tmpfiles.d\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 0}", "", "TASK [ceph-osd : disable transparent hugepage] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33", "Friday 22 June 2018 09:15:45 -0400 (0:00:00.475) 0:02:39.706 *********** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"e000059a4cfd8ce350b13f14305a46eaf99849ba\", \"dest\": \"/etc/tmpfiles.d/ceph_transparent_hugepage.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"21ac872f3aa1fb44b01d4f7ab00a35fc\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 158, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673345.97-243307488122427/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : get default vm.min_free_kbytes] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45", "Friday 22 June 2018 09:15:48 
-0400 (0:00:02.376) 0:02:42.083 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"sysctl\", \"-b\", \"vm.min_free_kbytes\"], \"delta\": \"0:00:00.003596\", \"end\": \"2018-06-22 13:15:48.800700\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:15:48.797104\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"67584\", \"stdout_lines\": [\"67584\"]}", "", "TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52", "Friday 22 June 2018 09:15:48 -0400 (0:00:00.470) 0:02:42.554 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"vm_min_free_kbytes\": \"67584\"}, \"changed\": false}", "", "TASK [ceph-osd : apply operating system tuning] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56", "Friday 22 June 2018 09:15:48 -0400 (0:00:00.062) 0:02:42.616 *********** ", "changed: [ceph-0] => (item={u'enable': u\"(osd_objectstore == 'bluestore')\", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {\"changed\": true, \"item\": {\"enable\": \"(osd_objectstore == 'bluestore')\", \"name\": \"fs.aio-max-nr\", \"value\": \"1048576\"}}", "changed: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {\"changed\": true, \"item\": {\"name\": \"fs.file-max\", \"value\": 26234859}}", "changed: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {\"changed\": true, \"item\": {\"name\": \"vm.zone_reclaim_mode\", \"value\": 0}}", "changed: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {\"changed\": true, \"item\": {\"name\": \"vm.swappiness\", \"value\": 10}}", "changed: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {\"changed\": true, \"item\": {\"name\": \"vm.min_free_kbytes\", \"value\": \"67584\"}}", "", "TASK [ceph-osd : install dependencies] *****************************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10", "Friday 22 June 2018 09:15:51 -0400 (0:00:02.420) 0:02:45.037 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include common.yml] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18", "Friday 22 June 2018 09:15:51 -0400 (0:00:00.038) 0:02:45.075 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0", "", "TASK [ceph-osd : create bootstrap-osd and osd directories] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2", "Friday 22 June 2018 09:15:51 -0400 (0:00:00.063) 0:02:45.139 *********** ", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [ceph-0] => (item=/var/lib/ceph/osd/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-osd : copy ceph key(s) if needed] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15", "Friday 22 June 2018 09:15:52 -0400 (0:00:00.886) 0:02:46.026 *********** ", "changed: [ceph-0] => (item={u'name': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"d8a7f9eb9d9dc0395da75fc7759797ea97e335aa\", \"dest\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": 
true, \"name\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\"}, \"md5sum\": \"5208039d17edb4ccda0d9023c061854b\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 113, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673352.3-30870827595785/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2", "Friday 22 June 2018 09:15:54 -0400 (0:00:02.283) 0:02:48.309 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11", "Friday 22 June 2018 09:15:54 -0400 (0:00:00.038) 0:02:48.348 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20", "Friday 22 June 2018 09:15:54 -0400 (0:00:00.049) 0:02:48.397 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29", "Friday 22 June 2018 09:15:54 -0400 (0:00:00.048) 0:02:48.446 *********** ", "skipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38", "Friday 22 June 2018 09:15:54 -0400 (0:00:00.044) 0:02:48.491 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47", "Friday 22 June 2018 09:15:54 -0400 (0:00:00.045) 0:02:48.537 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56", "Friday 22 June 2018 09:15:54 -0400 (0:00:00.042) 0:02:48.579 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62", "Friday 22 June 2018 09:15:54 -0400 (0:00:00.039) 0:02:48.619 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"docker_env_args\": \"-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0\"}, \"changed\": false}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70", "Friday 22 June 2018 09:15:54 -0400 (0:00:00.069) 0:02:48.688 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact 
docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78", "Friday 22 June 2018 09:15:54 -0400 (0:00:00.044) 0:02:48.732 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86", "Friday 22 June 2018 09:15:55 -0400 (0:00:00.048) 0:02:48.781 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2", "Friday 22 June 2018 09:15:55 -0400 (0:00:00.041) 0:02:48.822 *********** ", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sectors': u'41943040', u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. 
Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-06-20-11-57-19-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-06-20-11-57-19-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'fca00eb7-6dba-4ea0-b1e5-202b819f2b85', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'fca00eb7-6dba-4ea0-b1e5-202b819f2b85']}, u'sectors': u'41938911', u'start': u'4096', u'holders': [], u'size': u'20.00 GB'}}, u'holders': [], u'size': u'20.00 GB'}, 'key': u'vda'}) => {\"changed\": false, \"item\": {\"key\": \"vda\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {\"vda1\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"config-2\"], \"masters\": [], \"uuids\": [\"2018-06-20-11-57-19-00\"]}, \"sectors\": \"2048\", \"sectorsize\": 512, \"size\": \"1.00 MB\", \"start\": \"2048\", \"uuid\": \"2018-06-20-11-57-19-00\"}, \"vda2\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"img-rootfs\"], \"masters\": [], \"uuids\": [\"fca00eb7-6dba-4ea0-b1e5-202b819f2b85\"]}, \"sectors\": \"41938911\", \"sectorsize\": 512, \"size\": \"20.00 GB\", \"start\": \"4096\", \"uuid\": \"fca00eb7-6dba-4ea0-b1e5-202b819f2b85\"}}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"41943040\", \"sectorsize\": \"512\", \"size\": \"20.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', 
u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sectors': u'83886080', u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'40.00 GB'}, 'key': u'vdb'}) => {\"changed\": false, \"item\": {\"key\": \"vdb\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"83886080\", \"sectorsize\": \"512\", \"size\": \"40.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : resolve dedicated device link(s)] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15", "Friday 22 June 2018 09:15:55 -0400 (0:00:00.059) 0:02:48.881 *********** ", "", "TASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24", "Friday 22 June 2018 09:15:55 -0400 (0:00:00.046) 0:02:48.927 *********** ", "", "TASK [ceph-osd : set_fact build final dedicated_devices list] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32", "Friday 22 June 2018 09:15:55 -0400 (0:00:00.045) 0:02:48.973 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : read information about the devices] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29", "Friday 22 June 
2018 09:15:55 -0400 (0:00:00.039) 0:02:49.013 *********** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "", "TASK [ceph-osd : check the partition status of the osd disks] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2", "Friday 22 June 2018 09:15:55 -0400 (0:00:00.722) 0:02:49.736 *********** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:01.009297\", \"end\": \"2018-06-22 13:15:57.578932\", \"failed_when_result\": false, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:15:56.569635\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create gpt disk label] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11", "Friday 22 June 2018 09:15:57 -0400 (0:00:01.602) 0:02:51.338 *********** ", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-06-22 13:15:57.578932', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'start': u'2018-06-22 13:15:56.569635', u'delta': u'0:00:01.009297', 'item': u'/dev/vdb', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u'', '_ansible_ignore_errors': None, u'failed': False}, 
u'/dev/vdb']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdb\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.012183\", \"end\": \"2018-06-22 13:15:58.183577\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:01.009297\", \"end\": \"2018-06-22 13:15:57.578932\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:15:56.569635\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-06-22 13:15:58.171394\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : include scenarios/collocated.yml] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41", "Friday 22 June 2018 09:15:58 -0400 (0:00:00.607) 0:02:51.946 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0", "", "TASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5", "Friday 22 June 2018 09:15:58 -0400 (0:00:00.083) 0:02:52.030 *********** ", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u\"unit 'MiB' print\", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': 
None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 40960.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-6\", \"delta\": \"0:00:06.963994\", \"end\": \"2018-06-22 13:16:05.816535\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-06-22 13:15:58.852541\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: 
create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: 
create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-06-22 13:15:59'\\n+common_functions.sh:13: log(): echo '2018-06-22 13:15:59 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid e97f941b-4aee-4d8d-9905-035cecb14b1e /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:e97f941b-4aee-4d8d-9905-035cecb14b1e --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:6f1cf919-f6ce-4f28-9ff2-a2010186b52e --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.tj5UdE with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.tj5UdE\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.tj5UdE\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.tj5UdE\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/ceph_fsid.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/ceph_fsid.30599.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/fsid.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/fsid.30599.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/magic.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/magic.30599.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/journal_uuid.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/journal_uuid.30599.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.tj5UdE/journal -> /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/type.30599.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/type.30599.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.tj5UdE\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.tj5UdE\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-06-22 13:15:59'\", \"+common_functions.sh:13: log(): echo '2018-06-22 13:15:59 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid e97f941b-4aee-4d8d-9905-035cecb14b1e /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:e97f941b-4aee-4d8d-9905-035cecb14b1e --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:6f1cf919-f6ce-4f28-9ff2-a2010186b52e --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.tj5UdE with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.tj5UdE\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.tj5UdE\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.tj5UdE\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/ceph_fsid.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/ceph_fsid.30599.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/fsid.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/fsid.30599.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/magic.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/magic.30599.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/journal_uuid.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/journal_uuid.30599.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.tj5UdE/journal -> /dev/disk/by-partuuid/e97f941b-4aee-4d8d-9905-035cecb14b1e\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE/type.30599.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE/type.30599.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.tj5UdE\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.tj5UdE\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.tj5UdE\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.tj5UdE\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-06-22 13:15:59 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-06-22 13:15:59 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-06-22 13:15:59 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-06-22 13:15:59 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.lBMnxJz07c' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from 
root:root to ceph:ceph\\n2018-06-22 13:15:59 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=2588607 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=10354427, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=5055, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-06-22 13:15:59 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-06-22 13:15:59 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-06-22 13:15:59 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-06-22 13:15:59 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of 
'/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.lBMnxJz07c' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-06-22 13:15:59 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=2588607 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=10354427, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=5055, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}", "", "TASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30", "Friday 22 June 2018 09:16:05 -0400 
(0:00:07.548) 0:02:59.578 *********** ", "skipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53", "Friday 22 June 2018 09:16:05 -0400 (0:00:00.046) 0:02:59.625 *********** ", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u\"unit 'MiB' print\", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 40960.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 40960.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : 
include scenarios/non-collocated.yml] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48", "Friday 22 June 2018 09:16:05 -0400 (0:00:00.053) 0:02:59.679 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include scenarios/lvm.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56", "Friday 22 June 2018 09:16:05 -0400 (0:00:00.042) 0:02:59.721 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include activate_osds.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64", "Friday 22 June 2018 09:16:05 -0400 (0:00:00.037) 0:02:59.759 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include start_osds.yml] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72", "Friday 22 June 2018 09:16:06 -0400 (0:00:00.043) 0:02:59.802 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include docker/main.yml] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80", "Friday 22 June 2018 09:16:06 -0400 (0:00:00.040) 0:02:59.843 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0", "", "TASK [ceph-osd : include start_docker_osd.yml] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2", "Friday 22 June 2018 09:16:06 -0400 (0:00:00.080) 0:02:59.924 *********** ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0", "", "TASK [ceph-osd : umount 
ceph disk (if on openstack)] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4", "Friday 22 June 2018 09:16:06 -0400 (0:00:00.063) 0:02:59.987 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : test if the container image has the disk_list function] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13", "Friday 22 June 2018 09:16:06 -0400 (0:00:00.038) 0:03:00.025 *********** ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint=stat\", \"192.168.24.1:8787/rhceph:3-6\", \"disk_list.sh\"], \"delta\": \"0:00:00.429719\", \"end\": \"2018-06-22 13:16:07.199214\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:16:06.769495\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: 'disk_list.sh'\\n Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\\nDevice: 2ah/42d\\tInode: 46189889 Links: 1\\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\\nAccess: 2018-04-18 13:02:03.000000000 +0000\\nModify: 2018-04-18 13:02:03.000000000 +0000\\nChange: 2018-06-22 13:15:25.135445874 +0000\\n Birth: -\", \"stdout_lines\": [\" File: 'disk_list.sh'\", \" Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\", \"Device: 2ah/42d\\tInode: 46189889 Links: 1\", \"Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\", \"Access: 2018-04-18 13:02:03.000000000 +0000\", \"Modify: 2018-04-18 13:02:03.000000000 +0000\", \"Change: 2018-06-22 13:15:25.135445874 +0000\", \" Birth: -\"]}", "", "TASK [ceph-osd : generate ceph osd docker run script] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19", "Friday 22 June 2018 09:16:07 -0400 (0:00:00.934) 0:03:00.960 *********** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": 
\"6e2ae7f97fe861dbe9824133e6c912df4b7c8959\", \"dest\": \"/usr/share/ceph-osd-run.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"97ef03a63aca5a84f85a7a061ad42a61\", \"mode\": \"0744\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:usr_t:s0\", \"size\": 1000, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673367.23-199416710417990/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:28", "Friday 22 June 2018 09:16:09 -0400 (0:00:02.412) 0:03:03.372 *********** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"b7abfb86a4af8d6e54d349965cae96bf9b995c49\", \"dest\": \"/etc/systemd/system/ceph-osd@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8a53f95e6590750e7c4807589dd5864c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 496, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673369.64-214659588146178/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : systemd start osd container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:39", "Friday 22 June 2018 09:16:12 -0400 (0:00:02.624) 0:03:05.997 *********** ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"enabled\": true, \"item\": \"/dev/vdb\", \"name\": \"ceph-osd@vdb\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service basic.target systemd-journald.socket system-ceph\\\\x5cx2dosd.slice\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": 
\"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdb.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", 
\"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"14904\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"14904\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdb.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", 
\"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87", "Friday 22 June 2018 09:16:12 -0400 (0:00:00.728) 0:03:06.725 *********** ", "ok: [ceph-0] => (item={u'mon_cap': u'allow r', u'name': u'client.openstack', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'}) => {\"ansible_facts\": {\"openstack_keys_tmp\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}]}, \"changed\": false, \"item\": {\"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r\", \"name\": \"client.openstack\", \"osd_cap\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}}", "ok: [ceph-0] => (item={u'mon_cap': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth 
caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', u'osd_cap': u'allow rw'}) => {\"ansible_facts\": {\"openstack_keys_tmp\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}]}, \"changed\": false, \"item\": {\"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mds_cap\": \"allow *\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"name\": \"client.manila\", \"osd_cap\": \"allow rw\"}}", "ok: [ceph-0] => (item={u'mon_cap': u'allow rw', u'name': u'client.radosgw', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', u'osd_cap': u'allow rwx'}) => {\"ansible_facts\": {\"openstack_keys_tmp\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": 
\"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false, \"item\": {\"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow rw\", \"name\": \"client.radosgw\", \"osd_cap\": \"allow rwx\"}}", "", "TASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95", "Friday 22 June 2018 09:16:13 -0400 (0:00:00.108) 0:03:06.834 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"openstack_keys\": [{\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", 
\"osd\": \"allow rwx\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false}", "", "TASK [ceph-osd : wait for all osd to be up] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2", "Friday 22 June 2018 09:16:13 -0400 (0:00:00.099) 0:03:06.933 *********** ", "changed: [ceph-0 -> 192.168.24.8] => {\"attempts\": 1, \"changed\": true, \"cmd\": \"test \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_osds\\\"])')\\\" = \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_up_osds\\\"])')\\\"\", \"delta\": \"0:00:00.761118\", \"end\": \"2018-06-22 13:16:14.540851\", \"rc\": 0, \"start\": \"2018-06-22 13:16:13.779733\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : list existing pool(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12", "Friday 22 June 2018 09:16:14 -0400 (0:00:01.411) 0:03:08.345 *********** ", "changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.386280\", \"end\": \"2018-06-22 13:16:15.541877\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:15.155597\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error 
ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.371764\", \"end\": \"2018-06-22 13:16:16.417987\", \"failed_when_result\": false, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:16.046223\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.351274\", \"end\": \"2018-06-22 13:16:17.240806\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:16.889532\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.326659\", \"end\": \"2018-06-22 13:16:18.040070\", \"failed_when_result\": 
false, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:17.713411\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.324626\", \"end\": \"2018-06-22 13:16:18.851610\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:18.526984\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create openstack pool(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21", "Friday 22 June 2018 09:16:18 -0400 (0:00:04.310) 0:03:12.655 *********** ", "ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'images'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'images', u'size'], u'end': u'2018-06-22 13:16:15.541877', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': 
False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:15.155597', u'delta': u'0:00:00.386280', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'images'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"images\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.933696\", \"end\": \"2018-06-22 13:16:20.396904\", \"item\": [{\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.8\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.386280\", \"end\": \"2018-06-22 13:16:15.541877\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get images size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:15.155597\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: 
unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-22 13:16:19.463208\", \"stderr\": \"pool 'images' created\", \"stderr_lines\": [\"pool 'images' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'metrics'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'metrics', u'size'], u'end': u'2018-06-22 13:16:16.417987', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:16.046223', u'delta': u'0:00:00.371764', 'item': {u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'metrics'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"metrics\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.893886\", \"end\": \"2018-06-22 13:16:21.887666\", \"item\": [{\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.8\"}, 
\"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.371764\", \"end\": \"2018-06-22 13:16:16.417987\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:16.046223\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-22 13:16:20.993780\", \"stderr\": \"pool 'metrics' created\", \"stderr_lines\": [\"pool 'metrics' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'backups'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'backups', u'size'], u'end': u'2018-06-22 13:16:17.240806', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size', 
u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:16.889532', u'delta': u'0:00:00.351274', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'backups'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"backups\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.891178\", \"end\": \"2018-06-22 13:16:23.269395\", \"item\": [{\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.8\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.351274\", \"end\": \"2018-06-22 13:16:17.240806\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get backups size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:16.889532\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": 
\"2018-06-22 13:16:22.378217\", \"stderr\": \"pool 'backups' created\", \"stderr_lines\": [\"pool 'backups' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'vms'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'vms', u'size'], u'end': u'2018-06-22 13:16:18.040070', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:17.713411', u'delta': u'0:00:00.326659', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'vms'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"vms\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:00.912876\", \"end\": \"2018-06-22 13:16:24.668246\", \"item\": [{\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.8\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", 
\"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.326659\", \"end\": \"2018-06-22 13:16:18.040070\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get vms size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:17.713411\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-22 13:16:23.755370\", \"stderr\": \"pool 'vms' created\", \"stderr_lines\": [\"pool 'vms' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.8] => (item=[{u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}, {'_ansible_parsed': True, 'stderr_lines': [u\"Error ENOENT: unrecognized pool 'volumes'\"], u'cmd': [u'docker', u'exec', u'ceph-mon-controller-0', u'ceph', u'--cluster', u'ceph', u'osd', u'pool', u'get', u'volumes', u'size'], u'end': u'2018-06-22 13:16:18.851610', '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_item_result': True, u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout': u'', u'start': u'2018-06-22 13:16:18.526984', u'delta': u'0:00:00.324626', 'item': {u'application': u'rbd', u'pg_num': 32, u'name': 
u'volumes', u'rule_name': u''}, u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'stderr': u\"Error ENOENT: unrecognized pool 'volumes'\", '_ansible_ignore_errors': None, u'failed': False}]) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"create\", \"volumes\", \"32\", \"32\", \"replicated_rule\", \"1\"], \"delta\": \"0:00:01.051271\", \"end\": \"2018-06-22 13:16:26.212069\", \"item\": [{\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"controller-0\", \"ansible_host\": \"192.168.24.8\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.324626\", \"end\": \"2018-06-22 13:16:18.851610\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"docker exec ceph-mon-controller-0 ceph --cluster ceph osd pool get volumes size\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-06-22 13:16:18.526984\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}], \"rc\": 0, \"start\": \"2018-06-22 13:16:25.160798\", \"stderr\": \"pool 'volumes' created\", \"stderr_lines\": [\"pool 'volumes' created\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : assign application to pool(s)] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:41", "Friday 22 June 2018 09:16:26 -0400 (0:00:07.355) 0:03:20.011 *********** ", "ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"images\", \"rbd\"], \"delta\": \"0:00:01.321638\", \"end\": \"2018-06-22 13:16:28.239970\", \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:26.918332\", \"stderr\": \"enabled application 'rbd' on pool 'images'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"metrics\", \"openstack_gnocchi\"], \"delta\": \"0:00:00.500731\", \"end\": \"2018-06-22 13:16:29.211350\", \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:28.710619\", \"stderr\": \"enabled application 'openstack_gnocchi' on pool 'metrics'\", \"stderr_lines\": [\"enabled application 'openstack_gnocchi' on pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"backups\", \"rbd\"], \"delta\": 
\"0:00:00.528652\", \"end\": \"2018-06-22 13:16:30.205816\", \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:29.677164\", \"stderr\": \"enabled application 'rbd' on pool 'backups'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"vms\", \"rbd\"], \"delta\": \"0:00:00.541306\", \"end\": \"2018-06-22 13:16:31.225138\", \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:30.683832\", \"stderr\": \"enabled application 'rbd' on pool 'vms'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0 -> 192.168.24.8] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u''}) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"application\", \"enable\", \"volumes\", \"rbd\"], \"delta\": \"0:00:00.540333\", \"end\": \"2018-06-22 13:16:32.252575\", \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:31.712242\", \"stderr\": \"enabled application 'rbd' on pool 'volumes'\", \"stderr_lines\": [\"enabled application 'rbd' on pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create openstack cephx key(s)] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:50", "Friday 22 June 2018 09:16:32 -0400 (0:00:06.038) 
0:03:26.049 *********** ", "changed: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mon': u'allow r', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.openstack.keyring\"], \"delta\": \"0:00:00.835956\", \"end\": \"2018-06-22 13:16:33.888266\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:33.052310\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', 'mgr': u'allow *'}, 'name': u'client.manila', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'mode': u'0600'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.manila.keyring\"], \"delta\": \"0:00:00.773456\", \"end\": \"2018-06-22 13:16:35.134056\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command 
\\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:34.360600\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'', 'osd': u'allow rwx', 'mon': u'allow rw', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"import\", \"-i\", \"/etc/ceph//ceph.client.radosgw.keyring\"], \"delta\": \"0:00:00.759743\", \"end\": \"2018-06-22 13:16:36.365983\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-06-22 13:16:35.606240\", \"stderr\": \"imported keyring\", \"stderr_lines\": [\"imported keyring\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : fetch openstack cephx key(s)] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:63", "Friday 22 June 2018 09:16:36 -0400 (0:00:04.104) 0:03:30.154 *********** ", "changed: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mon': u'allow r', 'mgr': u'allow *'}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'name': u'client.openstack'}) => {\"changed\": true, \"checksum\": \"e8b2bdc53999aaa7ddcfb199e3722cc6d2ddde91\", 
\"dest\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.openstack.keyring\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"md5sum\": \"566356fccefb4488e70a2e9e03c00c1e\", \"remote_checksum\": \"e8b2bdc53999aaa7ddcfb199e3722cc6d2ddde91\", \"remote_md5sum\": null}", "changed: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', 'mgr': u'allow *'}, 'name': u'client.manila', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'mode': u'0600'}) => {\"changed\": true, \"checksum\": \"f4862790452df4e779b0fe4b180c86014cb1da5d\", \"dest\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.manila.keyring\", \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"md5sum\": \"6cdea25af14b36920e2bf08f8511bc2a\", \"remote_checksum\": \"f4862790452df4e779b0fe4b180c86014cb1da5d\", \"remote_md5sum\": null}", "changed: [ceph-0 -> 192.168.24.8] => (item={'caps': {'mds': u'', 'osd': u'allow rwx', 'mon': u'allow rw', 'mgr': u'allow *'}, 'mode': u'0600', 'key': 
u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'name': u'client.radosgw'}) => {\"changed\": true, \"checksum\": \"cd5b07c38b4be9fb966b57a01d3c261899cb78ca\", \"dest\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/53912472-747b-11e8-95a3-5254003d7dcb/etc/ceph/ceph.client.radosgw.keyring\", \"item\": {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"md5sum\": \"25d9851a517ff9a4c090a62ec2a3cc5c\", \"remote_checksum\": \"cd5b07c38b4be9fb966b57a01d3c261899cb78ca\", \"remote_md5sum\": null}", "", "TASK [ceph-osd : copy to other mons the openstack cephx key(s)] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:71", "Friday 22 June 2018 09:16:37 -0400 (0:00:01.490) 0:03:31.644 *********** ", "changed: [ceph-0 -> 192.168.24.8] => (item=[u'controller-0', {'name': u'client.openstack', 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'caps': {'mds': u'', 'osd': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', 'mgr': u'allow *', 'mon': u'allow r'}}]) => {\"changed\": true, \"checksum\": \"e8b2bdc53999aaa7ddcfb199e3722cc6d2ddde91\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow r\", \"osd\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.openstack.keyring\", \"secontext\": 
\"system_u:object_r:etc_t:s0\", \"size\": 299, \"state\": \"file\", \"uid\": 167}", "changed: [ceph-0 -> 192.168.24.8] => (item=[u'controller-0', {'mode': u'0600', 'name': u'client.manila', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'caps': {'mds': u'allow *', 'osd': u'allow rw', 'mgr': u'allow *', 'mon': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"'}}]) => {\"changed\": true, \"checksum\": \"f4862790452df4e779b0fe4b180c86014cb1da5d\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"osd\": \"allow rw\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.manila.keyring\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 276, \"state\": \"file\", \"uid\": 167}", "changed: [ceph-0 -> 192.168.24.8] => (item=[u'controller-0', {'name': u'client.radosgw', 'mode': u'0600', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'caps': {'mds': u'', 'osd': u'allow rwx', 'mgr': u'allow *', 'mon': u'allow rw'}}]) => {\"changed\": true, \"checksum\": \"cd5b07c38b4be9fb966b57a01d3c261899cb78ca\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": [\"controller-0\", {\"caps\": {\"mds\": \"\", \"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}], \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/etc/ceph/ceph.client.radosgw.keyring\", 
\"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 149, \"state\": \"file\", \"uid\": 167}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Friday 22 June 2018 09:16:43 -0400 (0:00:05.407) 0:03:37.052 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Friday 22 June 2018 09:16:43 -0400 (0:00:00.059) 0:03:37.112 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Friday 22 June 2018 09:16:43 -0400 (0:00:00.037) 0:03:37.149 *********** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Friday 22 June 2018 09:16:43 -0400 (0:00:00.070) 0:03:37.220 *********** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Friday 22 June 2018 09:16:43 -0400 (0:00:00.067) 0:03:37.288 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Friday 22 June 2018 09:16:43 -0400 (0:00:00.057) 0:03:37.346 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Friday 22 June 2018 09:16:43 -0400 (0:00:00.058) 0:03:37.404 *********** ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": 
\"9a770971b362c519fc75c5228fc22dd8d4cc68aa\", \"dest\": \"/tmp/restart_osd_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"c42d82e9b9c002f16b40c524607c38ea\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_home_t:s0\", \"size\": 3060, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673403.7-133197522237299/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Friday 22 June 2018 09:16:45 -0400 (0:00:02.348) 0:03:39.753 *********** ", "skipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.070) 0:03:39.824 *********** ", "skipping: [ceph-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.075) 0:03:39.899 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.064) 0:03:39.964 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.063) 0:03:40.027 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.040) 0:03:40.068 *********** ", "skipping: [ceph-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.046) 0:03:40.114 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.050) 0:03:40.164 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.059) 0:03:40.224 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.059) 0:03:40.283 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.036) 0:03:40.319 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.045) 0:03:40.365 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.048) 0:03:40.414 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Friday 22 June 2018 
09:16:46 -0400 (0:00:00.057) 0:03:40.471 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.060) 0:03:40.531 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.040) 0:03:40.572 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.046) 0:03:40.618 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.046) 0:03:40.664 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Friday 22 June 2018 09:16:46 -0400 (0:00:00.058) 0:03:40.723 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.061) 0:03:40.784 *********** ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.039) 0:03:40.824 *********** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", 
\"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.073) 0:03:40.897 *********** ", "skipping: [ceph-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.073) 0:03:40.971 *********** ", "ok: [ceph-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph osd install 'Complete'] *****************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:156", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.081) 0:03:41.052 *********** ", "ok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"end\": \"20180622091647Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY [mdss] ********************************************************************", "skipping: no hosts matched", "", "PLAY [rgws] ********************************************************************", "skipping: no hosts matched", "", "PLAY [nfss] ********************************************************************", "skipping: no hosts matched", "", "PLAY [rbdmirrors] **************************************************************", "skipping: no hosts matched", "", "PLAY [restapis] ****************************************************************", "skipping: no hosts matched", "", "PLAY [clients] *****************************************************************", "", "TASK [set ceph client install 'In Progress'] ***********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:307", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.144) 0:03:41.196 
*********** ", "ok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"start\": \"20180622091647Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.070) 0:03:41.267 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.041) 0:03:41.308 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.038) 0:03:41.347 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.048) 0:03:41.395 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.042) 0:03:41.438 *********** ", "skipping: [compute-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.040) 0:03:41.478 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.041) 0:03:41.519 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.039) 0:03:41.558 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.036) 0:03:41.595 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.043) 0:03:41.638 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.039) 0:03:41.678 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.040) 0:03:41.718 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Friday 22 June 2018 09:16:47 -0400 (0:00:00.039) 0:03:41.758 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.041) 0:03:41.800 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.042) 0:03:41.842 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.047) 0:03:41.889 *********** ", "skipping: 
[compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.042) 0:03:41.932 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.041) 0:03:41.973 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.040) 0:03:42.013 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.041) 0:03:42.055 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.040) 0:03:42.095 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] 
***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.045) 0:03:42.141 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.039) 0:03:42.180 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.041) 0:03:42.221 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.039) 0:03:42.261 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.039) 0:03:42.300 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.037) 0:03:42.338 
*********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.043) 0:03:42.381 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Friday 22 June 2018 09:16:48 -0400 (0:00:00.039) 0:03:42.421 *********** ", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Friday 22 June 2018 09:16:49 -0400 (0:00:00.601) 0:03:43.023 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Friday 22 June 2018 09:16:49 -0400 (0:00:00.069) 0:03:43.092 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Friday 22 June 2018 09:16:49 -0400 (0:00:00.187) 0:03:43.280 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Friday 22 June 
2018 09:16:49 -0400 (0:00:00.067) 0:03:43.347 *********** ", "ok: [compute-0 -> 192.168.24.8] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Friday 22 June 2018 09:16:49 -0400 (0:00:00.234) 0:03:43.581 *********** ", "ok: [compute-0 -> 192.168.24.8] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.358092\", \"end\": \"2018-06-22 13:16:50.792922\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:16:50.434830\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"53912472-747b-11e8-95a3-5254003d7dcb\", \"stdout_lines\": [\"53912472-747b-11e8-95a3-5254003d7dcb\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Friday 22 June 2018 09:16:50 -0400 (0:00:00.977) 0:03:44.559 *********** ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Friday 22 June 2018 09:16:50 -0400 (0:00:00.196) 0:03:44.755 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.055) 0:03:44.810 *********** ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"gid\": 985, \"group\": \"mistral\", 
\"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 80, \"state\": \"directory\", \"uid\": 988}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.216) 0:03:45.027 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"fsid\": \"53912472-747b-11e8-95a3-5254003d7dcb\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.076) 0:03:45.104 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.078) 0:03:45.183 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.045) 0:03:45.228 *********** ", "ok: [compute-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 53912472-747b-11e8-95a3-5254003d7dcb | tee /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since 
/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.199) 0:03:45.428 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.044) 0:03:45.472 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.046) 0:03:45.519 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"mds_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.072) 0:03:45.591 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.040) 0:03:45.632 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Friday 22 June 2018 09:16:51 -0400 (0:00:00.073) 0:03:45.705 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.071) 0:03:45.777 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.072) 0:03:45.849 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.052) 0:03:45.902 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.047) 0:03:45.949 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.045) 0:03:45.995 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : 
set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.043) 0:03:46.038 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.041) 0:03:46.080 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.042) 0:03:46.122 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.053) 0:03:46.176 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.070) 0:03:46.246 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK 
[ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Friday 22 June 2018 09:16:52 -0400 (0:00:00.067) 0:03:46.314 *********** ", "changed: [compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": 
\"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [compute-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": 
\"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Friday 22 June 2018 09:16:57 -0400 (0:00:05.303) 0:03:51.617 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Friday 22 June 2018 09:16:57 -0400 (0:00:00.046) 0:03:51.664 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Friday 22 June 2018 09:16:57 -0400 (0:00:00.046) 0:03:51.710 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Friday 22 June 2018 09:16:57 -0400 (0:00:00.045) 0:03:51.756 *********** ", "ok: [compute-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [compute-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": 
\"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Friday 22 June 2018 09:16:58 -0400 (0:00:00.979) 0:03:52.735 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"monitor_name\": \"compute-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Friday 22 June 2018 09:16:59 -0400 (0:00:00.074) 0:03:52.810 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Friday 22 June 2018 09:16:59 -0400 (0:00:00.041) 0:03:52.852 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.026819\", \"end\": \"2018-06-22 13:16:59.633587\", \"rc\": 0, \"start\": \"2018-06-22 13:16:59.606768\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 94f4240/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 94f4240/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Friday 22 June 2018 09:16:59 -0400 (0:00:00.540) 0:03:53.392 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Friday 
22 June 2018 09:16:59 -0400 (0:00:00.074) 0:03:53.467 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-compute-0\"], \"delta\": \"0:00:00.029369\", \"end\": \"2018-06-22 13:17:00.254322\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:17:00.224953\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.546) 0:03:54.014 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.049) 0:03:54.063 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.054) 0:03:54.117 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.047) 0:03:54.164 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : stat for ceph config and 
keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.051) 0:03:54.216 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.051) 0:03:54.268 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.057) 0:03:54.326 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.041) 0:03:54.368 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.042) 0:03:54.410 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.047) 0:03:54.458 *********** ", 
"skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.047) 0:03:54.505 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.049) 0:03:54.555 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.054) 0:03:54.609 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.046) 0:03:54.656 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.044) 0:03:54.701 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Friday 22 June 2018 09:17:00 -0400 (0:00:00.044) 0:03:54.745 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:54.790 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:54.834 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.051) 0:03:54.886 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.046) 0:03:54.933 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.045) 0:03:54.978 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.045) 0:03:55.024 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.043) 0:03:55.068 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.113 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.055) 0:03:55.168 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.045) 0:03:55.213 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Friday 22 June 
2018 09:17:01 -0400 (0:00:00.047) 0:03:55.261 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.306 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.350 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.394 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.054) 0:03:55.449 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.494 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact 
ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.538 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.583 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.044) 0:03:55.627 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Friday 22 June 2018 09:17:01 -0400 (0:00:00.045) 0:03:55.673 *********** ", "ok: [compute-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:15.739816\", \"end\": \"2018-06-22 13:17:18.254373\", \"rc\": 0, \"start\": \"2018-06-22 13:17:02.514557\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-6: Pulling from 192.168.24.1:8787/rhceph\\n9a32f102e677: Pulling fs layer\\nb8aa42cec17a: Pulling fs layer\\nf00cbf28d025: Pulling fs layer\\nb8aa42cec17a: Download complete\\n9a32f102e677: Download complete\\nf00cbf28d025: Verifying Checksum\\nf00cbf28d025: Download complete\\n9a32f102e677: Pull complete\\nb8aa42cec17a: Pull complete\\nf00cbf28d025: Pull complete\\nDigest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-6: Pulling from 192.168.24.1:8787/rhceph\", \"9a32f102e677: Pulling fs layer\", \"b8aa42cec17a: Pulling fs layer\", \"f00cbf28d025: Pulling fs layer\", \"b8aa42cec17a: Download complete\", \"9a32f102e677: Download complete\", \"f00cbf28d025: Verifying Checksum\", \"f00cbf28d025: Download complete\", \"9a32f102e677: Pull complete\", \"b8aa42cec17a: Pull complete\", \"f00cbf28d025: Pull complete\", \"Digest: sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-6\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-6 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Friday 22 June 2018 09:17:18 -0400 (0:00:16.347) 0:04:12.020 *********** ", "changed: [compute-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-6\"], \"delta\": \"0:00:00.030015\", \"end\": \"2018-06-22 13:17:18.913537\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-06-22 13:17:18.883522\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-6\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n 
\\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n 
\\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"9817222a9fd1\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": 
{},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\\n \\\"Volumes\\\": {\\n \\\"/etc/ceph\\\": {},\\n \\\"/etc/ganesha\\\": {},\\n \\\"/var/lib/ceph\\\": {}\\n },\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"master\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"master\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\\n \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"6\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d 
--net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\\n \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 732827275,\\n \\\"VirtualSize\\\": 732827275,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/0179656c641f4722d6f09053970bc22370490068858f90ad211fc530e928d6a2/diff:/var/lib/docker/overlay2/4a0f358bb31bae2256894d8f9b3d953b4779cb17b2cb2fdef512883ca71f0180/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\\n \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\\n \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:9f92f1dc96eccd12eda1e809a3539e58f83faad6289a21beb1a6ebac05b91f42\\\",\", \" \\\"RepoTags\\\": [\", \" 
\\\"192.168.24.1:8787/rhceph:3-6\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-04-18T13:13:30.317845Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z2.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:e8b064b6d59e5ae67703983d9bcadb3e48e4bad1443bd2d8ca86096ce6969ba9\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": 
\\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"9817222a9fd1\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", 
\" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"e0292b8001103cbd70a728aa73b8c602430c923944c4fcbaf5e62eda9e16530f\\\",\", \" \\\"Volumes\\\": {\", \" \\\"/etc/ceph\\\": {},\", \" \\\"/etc/ganesha\\\": {},\", \" \\\"/var/lib/ceph\\\": {}\", \" },\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"master\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"99f689cd2c12f8332924db6a0cc0463bb26631b0\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"master\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-04-18T13:01:58.678631\\\",\", \" \\\"com.redhat.build-host\\\": \\\"ip-10-29-120-145.ec2.internal\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-docker\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e 
CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"6\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-6\\\",\", \" \\\"vcs-ref\\\": \\\"9fe91bb07dc2b866b3bd024bbaf43f09d4eb05e9\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 732827275,\", \" \\\"VirtualSize\\\": 732827275,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/0179656c641f4722d6f09053970bc22370490068858f90ad211fc530e928d6a2/diff:/var/lib/docker/overlay2/4a0f358bb31bae2256894d8f9b3d953b4779cb17b2cb2fdef512883ca71f0180/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/diff\\\",\", \" \\\"WorkDir\\\": 
\\\"/var/lib/docker/overlay2/98a887e6aeda44e154c4448e9ea3811e5375e5e3e3237140a13770dd3a4a0ea0/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:e9fb3906049428130d8fc22e715dc6665306ebbf483290dd139be5d7457d9749\\\",\", \" \\\"sha256:1b0bb3f6ad7e8dbdc1d19cf782dc06227de1d95a5d075efb592196a509e6e3a9\\\",\", \" \\\"sha256:f0761cecd36be7f88de04a51a9c741d047c0ad7bbd4e2312e57f40e3f6a68447\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Friday 22 June 2018 09:17:18 -0400 (0:00:00.742) 0:04:12.763 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:c8f9642dc0d71f2957ea5bc9b5b689cb39cfd02321cab3aa244bfe2a9f9b9b8a\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.076) 0:04:12.840 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.044) 0:04:12.884 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.044) 0:04:12.929 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", 
"TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.045) 0:04:12.975 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.049) 0:04:13.024 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.044) 0:04:13.069 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.043) 0:04:13.112 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.045) 0:04:13.158 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.047) 
0:04:13.206 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.043) 0:04:13.249 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.043) 0:04:13.293 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Friday 22 June 2018 09:17:19 -0400 (0:00:00.051) 0:04:13.345 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-6\", \"--version\"], \"delta\": \"0:00:00.560640\", \"end\": \"2018-06-22 13:17:20.651789\", \"rc\": 0, \"start\": \"2018-06-22 13:17:20.091149\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-6.el7cp (78f60b924802e34d44f7078029a40dbe6c0c922f) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Friday 22 June 2018 09:17:20 -0400 (0:00:01.074) 0:04:14.419 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-6.el7cp\"}, \"changed\": false}", "", "TASK 
[ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Friday 22 June 2018 09:17:20 -0400 (0:00:00.076) 0:04:14.495 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Friday 22 June 2018 09:17:20 -0400 (0:00:00.052) 0:04:14.548 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Friday 22 June 2018 09:17:20 -0400 (0:00:00.047) 0:04:14.595 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Friday 22 June 2018 09:17:20 -0400 (0:00:00.076) 0:04:14.671 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Friday 22 June 2018 09:17:20 -0400 (0:00:00.047) 0:04:14.719 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Friday 22 June 2018 09:17:20 -0400 (0:00:00.047) 0:04:14.767 *********** ", "changed: 
[compute-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [compute-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Friday 22 June 2018 09:17:23 -0400 (0:00:02.220) 0:04:16.987 *********** ", "skipping: [compute-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Friday 22 June 2018 09:17:23 -0400 (0:00:00.044) 0:04:17.032 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Friday 22 June 2018 09:17:23 -0400 (0:00:00.044) 0:04:17.076 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Friday 22 June 2018 09:17:23 -0400 (0:00:00.052) 0:04:17.129 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Friday 22 June 2018 09:17:23 -0400 (0:00:00.042) 0:04:17.172 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Friday 22 June 2018 09:17:23 -0400 (0:00:00.042) 0:04:17.214 *********** ", "changed: [compute-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] 
*********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Friday 22 June 2018 09:17:23 -0400 (0:00:00.512) 0:04:17.727 *********** ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for compute-0", 
"NOTIFIED HANDLER ceph-defaults : copy mgr restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for compute-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for compute-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for compute-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for compute-0", "changed: [compute-0] => {\"changed\": true, \"checksum\": \"eeef7a153f878e6b1077230106cfc6c53cc7d23e\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"7fe0e8e07ef9226787b767b021af3e3a\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 978, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673444.0-36547092737052/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Friday 22 June 2018 09:17:26 -0400 (0:00:03.019) 0:04:20.746 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : copy ceph admin keyring when non containerized deployment] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml:2", "Friday 22 June 2018 09:17:27 -0400 (0:00:00.040) 0:04:20.787 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-client : set_fact keys_tmp - 
preserve backward compatibility after the introduction of the ceph_keys module] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:2", "Friday 22 June 2018 09:17:27 -0400 (0:00:00.047) 0:04:20.835 *********** ", "ok: [compute-0] => (item={u'mon_cap': u'allow r', u'name': u'client.openstack', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}]}, \"changed\": false, \"item\": {\"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r\", \"name\": \"client.openstack\", \"osd_cap\": \"allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics\"}}", "ok: [compute-0] => (item={u'mon_cap': u'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', u'osd_cap': u'allow rw'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow 
rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}]}, \"changed\": false, \"item\": {\"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mds_cap\": \"allow *\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"\", \"name\": \"client.manila\", \"osd_cap\": \"allow rw\"}}", "ok: [compute-0] => (item={u'mon_cap': u'allow rw', u'name': u'client.radosgw', u'mgr_cap': u'allow *', u'mode': u'0600', u'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', u'osd_cap': u'allow rwx'}) => {\"ansible_facts\": {\"keys_tmp\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", 
\"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false, \"item\": {\"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mgr_cap\": \"allow *\", \"mode\": \"0600\", \"mon_cap\": \"allow rw\", \"name\": \"client.radosgw\", \"osd_cap\": \"allow rwx\"}}", "", "TASK [ceph-client : set_fact keys - override keys_tmp with keys] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:9", "Friday 22 June 2018 09:17:27 -0400 (0:00:00.205) 0:04:21.041 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"keys\": [{\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}]}, \"changed\": false}", "", "TASK [ceph-client : run a dummy container (sleep 300) from where we can create pool(s)/key(s)] ***", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:15", "Friday 22 June 2018 09:17:27 -0400 (0:00:00.175) 0:04:21.217 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", 
\"run\", \"--rm\", \"-d\", \"-v\", \"/etc/ceph:/etc/ceph:z\", \"--name\", \"ceph-create-keys\", \"--entrypoint=sleep\", \"192.168.24.1:8787/rhceph:3-6\", \"300\"], \"delta\": \"0:00:00.294909\", \"end\": \"2018-06-22 13:17:28.351669\", \"rc\": 0, \"start\": \"2018-06-22 13:17:28.056760\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ffa2eb0b84f2da4c45ccee8015b7ee3089a1521a40090f5ec125c3078378e912\", \"stdout_lines\": [\"ffa2eb0b84f2da4c45ccee8015b7ee3089a1521a40090f5ec125c3078378e912\"]}", "", "TASK [ceph-client : set_fact delegated_node] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:30", "Friday 22 June 2018 09:17:28 -0400 (0:00:00.892) 0:04:22.110 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"delegated_node\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-client : set_fact condition_copy_admin_key] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:34", "Friday 22 June 2018 09:17:28 -0400 (0:00:00.165) 0:04:22.275 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"condition_copy_admin_key\": true}, \"changed\": false}", "", "TASK [ceph-client : set_fact docker_exec_cmd] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:38", "Friday 22 June 2018 09:17:28 -0400 (0:00:00.073) 0:04:22.349 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0 \"}, \"changed\": false}", "", "TASK [ceph-client : create cephx key(s)] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:44", "Friday 22 June 2018 09:17:28 -0400 (0:00:00.134) 0:04:22.484 *********** ", "changed: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow 
rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mon': u\"'allow r'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'name': u'client.openstack'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.openstack.keyring\", \"--name\", \"client.openstack\", \"--add-key\", \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"--cap\", \"mds\", \"''\", \"--cap\", \"osd\", \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow r'\"], \"delta\": \"0:00:00.145042\", \"end\": \"2018-06-22 13:17:29.408032\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"rc\": 0, \"start\": \"2018-06-22 13:17:29.262990\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.openstack.keyring\\nadded entity client.openstack auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.openstack.keyring\", \"added entity client.openstack auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA== with 0 caps)\"]}", "changed: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth 
get-or-create\\\\\"\\'', 'mgr': u\"'allow *'\"}, 'name': u'client.manila', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'mode': u'0600'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.manila.keyring\", \"--name\", \"client.manila\", \"--add-key\", \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"--cap\", \"mds\", \"'allow *'\", \"--cap\", \"osd\", \"'allow rw'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\"], \"delta\": \"0:00:00.143312\", \"end\": \"2018-06-22 13:17:30.006629\", \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"rc\": 0, \"start\": \"2018-06-22 13:17:29.863317\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.manila.keyring\\nadded entity client.manila auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.manila.keyring\", \"added entity client.manila auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw== with 0 caps)\"]}", "changed: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mon': u\"'allow rw'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'name': u'client.radosgw'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", 
\"ceph-authtool\", \"--create-keyring\", \"/etc/ceph/ceph.client.radosgw.keyring\", \"--name\", \"client.radosgw\", \"--add-key\", \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"--cap\", \"mds\", \"''\", \"--cap\", \"osd\", \"'allow rwx'\", \"--cap\", \"mgr\", \"'allow *'\", \"--cap\", \"mon\", \"'allow rw'\"], \"delta\": \"0:00:00.150798\", \"end\": \"2018-06-22 13:17:30.609805\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"rc\": 0, \"start\": \"2018-06-22 13:17:30.459007\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"creating /etc/ceph/ceph.client.radosgw.keyring\\nadded entity client.radosgw auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw== with 0 caps)\", \"stdout_lines\": [\"creating /etc/ceph/ceph.client.radosgw.keyring\", \"added entity client.radosgw auth auth(auid = 18446744073709551615 key=AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw== with 0 caps)\"]}", "", "TASK [ceph-client : slurp client cephx key(s)] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:62", "Friday 22 June 2018 09:17:30 -0400 (0:00:01.912) 0:04:24.397 *********** ", "ok: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mon': u\"'allow r'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'name': u'client.openstack'}) => {\"changed\": false, \"content\": 
\"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBUWxwbHJ0Vm5xbkp6ZGNhSGdUSnNPQT09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}", "ok: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\'', 'mgr': u\"'allow *'\"}, 'name': u'client.manila', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'mode': u'0600'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBYXU3UmxhWkw1eXZMVjlGa01FblVWdz09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": 
\"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}", "ok: [compute-0 -> 192.168.24.8] => (item={'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mon': u\"'allow rw'\", 'mgr': u\"'allow *'\"}, 'mode': u'0600', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'name': u'client.radosgw'}) => {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCMk55cGJBQUFBQUJBQTJlVTBsYURJaUpHajU2TzMwS29JZHc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==\", \"encoding\": \"base64\", \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}", "", "TASK [ceph-client : list existing pool(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:74", "Friday 22 June 2018 09:17:32 -0400 (0:00:01.377) 0:04:25.774 *********** ", "", "TASK [ceph-client : create ceph pool(s)] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:86", "Friday 22 June 2018 09:17:32 -0400 (0:00:00.042) 0:04:25.817 *********** ", "", "TASK [ceph-client : kill a dummy container that created pool(s)/key(s)] ********", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:109", "Friday 22 June 2018 09:17:32 -0400 (0:00:00.042) 0:04:25.859 *********** ", "ok: [compute-0] => {\"changed\": false, \"cmd\": [\"docker\", \"rm\", \"-f\", \"ceph-create-keys\"], \"delta\": \"0:00:00.135610\", \"end\": \"2018-06-22 13:17:32.748685\", \"rc\": 0, \"start\": \"2018-06-22 13:17:32.613075\", \"stderr\": \"\", \"stderr_lines\": [], 
\"stdout\": \"ceph-create-keys\", \"stdout_lines\": [\"ceph-create-keys\"]}", "", "TASK [ceph-client : get client cephx keys] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:116", "Friday 22 June 2018 09:17:32 -0400 (0:00:00.651) 0:04:26.511 *********** ", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBUWxwbHJ0Vm5xbkp6ZGNhSGdUSnNPQT09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.openstack.keyring', 'item': {'mode': u'0600', 'name': u'client.openstack', 'key': u'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==', 'caps': {'mds': u\"''\", 'osd': u\"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\", 'mgr': u\"'allow *'\", 'mon': u\"'allow r'\"}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.openstack.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"4d6e0bd376eba7986733f512ff8b09821ea74177\", \"dest\": \"/etc/ceph/ceph.client.openstack.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": 
\"W2NsaWVudC5vcGVuc3RhY2tdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBUWxwbHJ0Vm5xbkp6ZGNhSGdUSnNPQT09CgljYXBzIG1kcyA9ICInJyIKCWNhcHMgbWdyID0gIidhbGxvdyAqJyIKCWNhcHMgbW9uID0gIidhbGxvdyByJyIKCWNhcHMgb3NkID0gIidhbGxvdyBjbGFzcy1yZWFkIG9iamVjdF9wcmVmaXggcmJkX2NoaWxkcmVuLCBhbGxvdyByd3ggcG9vbD12b2x1bWVzLCBhbGxvdyByd3ggcG9vbD1iYWNrdXBzLCBhbGxvdyByd3ggcG9vbD12bXMsIGFsbG93IHJ3eCBwb29sPWltYWdlcywgYWxsb3cgcnd4IHBvb2w9bWV0cmljcyciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.openstack.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r'\", \"osd\": \"'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics'\"}, \"key\": \"AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"source\": \"/etc/ceph/ceph.client.openstack.keyring\"}, \"md5sum\": \"2717ff4f690665b611bacab8236d6e50\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 307, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673452.83-90501969988221/source\", \"state\": \"file\", \"uid\": 167}", "changed: [compute-0] => (item={'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBYXU3UmxhWkw1eXZMVjlGa01FblVWdz09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=', 'failed': False, u'source': u'/etc/ceph/ceph.client.manila.keyring', 'item': {'name': u'client.manila', 'mode': u'0600', 'key': u'AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==', 'caps': 
{'mds': u\"'allow *'\", 'osd': u\"'allow rw'\", 'mgr': u\"'allow *'\", 'mon': u'\\'allow r, allow command \\\\\"auth del\\\\\", allow command \\\\\"auth caps\\\\\", allow command \\\\\"auth get\\\\\", allow command \\\\\"auth get-or-create\\\\\"\\''}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.manila.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"3526ba4ba9af42743214640b911c0f92e35ad076\", \"dest\": \"/etc/ceph/ceph.client.manila.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5tYW5pbGFdCglrZXkgPSBBUUIyTnlwYkFBQUFBQkFBYXU3UmxhWkw1eXZMVjlGa01FblVWdz09CgljYXBzIG1kcyA9ICInYWxsb3cgKiciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgciwgYWxsb3cgY29tbWFuZCBcImF1dGggZGVsXCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGNhcHNcIiwgYWxsb3cgY29tbWFuZCBcImF1dGggZ2V0XCIsIGFsbG93IGNvbW1hbmQgXCJhdXRoIGdldC1vci1jcmVhdGVcIiciCgljYXBzIG9zZCA9ICInYWxsb3cgcncnIgo=\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.manila.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"'allow *'\", \"mgr\": \"'allow *'\", \"mon\": \"'allow r, allow command \\\\\\\"auth del\\\\\\\", allow command \\\\\\\"auth caps\\\\\\\", allow command \\\\\\\"auth get\\\\\\\", allow command \\\\\\\"auth get-or-create\\\\\\\"'\", \"osd\": \"'allow rw'\"}, \"key\": \"AQB2NypbAAAAABAAau7RlaZL5yvLV9FkMEnUVw==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"source\": \"/etc/ceph/ceph.client.manila.keyring\"}, \"md5sum\": \"2acc0be8ca9bbd36db382a6bc3ce46bd\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 284, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673455.19-4454543819/source\", \"state\": \"file\", \"uid\": 167}", "changed: [compute-0] => (item={'_ansible_parsed': True, 
'changed': False, '_ansible_no_log': False, u'encoding': u'base64', '_ansible_item_result': True, u'content': u'W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCMk55cGJBQUFBQUJBQTJlVTBsYURJaUpHajU2TzMwS29JZHc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==', 'failed': False, u'source': u'/etc/ceph/ceph.client.radosgw.keyring', 'item': {'mode': u'0600', 'name': u'client.radosgw', 'key': u'AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==', 'caps': {'mds': u\"''\", 'osd': u\"'allow rwx'\", 'mgr': u\"'allow *'\", 'mon': u\"'allow rw'\"}}, u'invocation': {u'module_args': {u'src': u'/etc/ceph/ceph.client.radosgw.keyring'}}, '_ansible_delegated_vars': {'ansible_delegated_host': u'controller-0', 'ansible_host': u'192.168.24.8'}, '_ansible_ignore_errors': None}) => {\"changed\": true, \"checksum\": \"242621e5f01c2a1fde923935408055d2268888b1\", \"dest\": \"/etc/ceph/ceph.client.radosgw.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"changed\": false, \"content\": \"W2NsaWVudC5yYWRvc2d3XQoJa2V5ID0gQVFCMk55cGJBQUFBQUJBQTJlVTBsYURJaUpHajU2TzMwS29JZHc9PQoJY2FwcyBtZHMgPSAiJyciCgljYXBzIG1nciA9ICInYWxsb3cgKiciCgljYXBzIG1vbiA9ICInYWxsb3cgcncnIgoJY2FwcyBvc2QgPSAiJ2FsbG93IHJ3eCciCg==\", \"encoding\": \"base64\", \"failed\": false, \"invocation\": {\"module_args\": {\"src\": \"/etc/ceph/ceph.client.radosgw.keyring\"}}, \"item\": {\"caps\": {\"mds\": \"''\", \"mgr\": \"'allow *'\", \"mon\": \"'allow rw'\", \"osd\": \"'allow rwx'\"}, \"key\": \"AQB2NypbAAAAABAA2eU0laDIiJGj56O30KoIdw==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"source\": \"/etc/ceph/ceph.client.radosgw.keyring\"}, \"md5sum\": \"1791e54f0adfcef256a26063d743e45d\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 157, \"src\": \"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673457.5-101088153961914/source\", \"state\": \"file\", \"uid\": 167}", "", "RUNNING HANDLER [ceph-defaults : set 
_mon_handler_called before restart] *******", "Friday 22 June 2018 09:17:39 -0400 (0:00:07.141) 0:04:33.652 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Friday 22 June 2018 09:17:39 -0400 (0:00:00.065) 0:04:33.718 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Friday 22 June 2018 09:17:39 -0400 (0:00:00.041) 0:04:33.760 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.075) 0:04:33.836 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.076) 0:04:33.913 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.062) 0:04:33.975 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.062) 0:04:34.038 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Friday 
22 June 2018 09:17:40 -0400 (0:00:00.042) 0:04:34.080 *********** ", "skipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.069) 0:04:34.150 *********** ", "skipping: [compute-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.073) 0:04:34.224 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.064) 0:04:34.289 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.063) 0:04:34.353 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.040) 0:04:34.394 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.048) 0:04:34.442 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.047) 0:04:34.490 *********** ", "ok: 
[compute-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.064) 0:04:34.555 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.065) 0:04:34.620 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.041) 0:04:34.661 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.046) 0:04:34.708 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Friday 22 June 2018 09:17:40 -0400 (0:00:00.047) 0:04:34.755 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.063) 0:04:34.819 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.061) 0:04:34.881 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd 
mirror daemon(s) - non container] ***", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.040) 0:04:34.922 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.053) 0:04:34.975 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.048) 0:04:35.024 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.064) 0:04:35.088 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.062) 0:04:35.151 *********** ", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.042) 0:04:35.193 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.074) 0:04:35.268 *********** ", "skipping: [compute-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", 
"Friday 22 June 2018 09:17:41 -0400 (0:00:00.071) 0:04:35.339 *********** ", "ok: [compute-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph client install 'Complete'] **************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:324", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.199) 0:04:35.538 *********** ", "ok: [compute-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_client\": {\"end\": \"20180622091741Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=88 changed=18 unreachable=0 failed=0 ", "compute-0 : ok=57 changed=7 unreachable=0 failed=0 ", "controller-0 : ok=119 changed=20 unreachable=0 failed=0 ", "", "", "INSTALLER STATUS ***************************************************************", "Install Ceph Monitor : Complete (0:01:08)", "Install Ceph Manager : Complete (0:00:38)", "Install Ceph OSD : Complete (0:01:45)", "Install Ceph Client : Complete (0:00:54)", "", "Friday 22 June 2018 09:17:41 -0400 (0:00:00.156) 0:04:35.694 *********** ", "=============================================================================== ", "ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 17.16s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----", "ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 16.72s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----", "ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-6 image -------- 16.35s", "/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179 ----", "gather and delegate facts ----------------------------------------------- 8.62s", "/usr/share/ceph-ansible/site-docker.yml.sample:29 
-----------------------------", "ceph-osd : prepare ceph containerized osd disk collocated --------------- 7.55s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5 -------", "ceph-osd : create openstack pool(s) ------------------------------------- 7.36s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21 ----------", "ceph-client : get client cephx keys ------------------------------------- 7.14s", "/usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml:116 -----", "ceph-osd : assign application to pool(s) -------------------------------- 6.04s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:41 ----------", "ceph-osd : copy to other mons the openstack cephx key(s) ---------------- 5.41s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:71 ----------", "ceph-defaults : create ceph initial directories ------------------------- 5.36s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 ", "ceph-defaults : create ceph initial directories ------------------------- 5.34s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 ", "ceph-defaults : create ceph initial directories ------------------------- 5.30s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 ", "ceph-defaults : create ceph initial directories ------------------------- 5.08s", "/usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18 ", "ceph-osd : list existing pool(s) ---------------------------------------- 4.31s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12 ----------", "ceph-osd : create openstack cephx key(s) -------------------------------- 4.10s", "/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:50 ----------", "ceph-config : generate ceph.conf configuration file --------------------- 3.32s", 
"/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------", "ceph-config : generate ceph.conf configuration file --------------------- 3.08s", "/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------", "ceph-config : generate ceph.conf configuration file --------------------- 3.02s", "/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84 -------------------", "ceph-mon : push ceph files to the ansible server ------------------------ 2.89s", "/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2 -------", "ceph-mgr : generate systemd unit file ----------------------------------- 2.88s", "/usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2 ----"]} >2018-06-22 09:17:42,309 p=21516 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-22 09:17:42,328 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,347 p=21516 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-06-22 09:17:42,365 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,381 p=21516 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-22 09:17:42,399 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,417 p=21516 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-22 09:17:42,433 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,450 p=21516 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-22 09:17:42,467 p=21516 u=mistral | skipping: [undercloud] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,482 p=21516 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-22 09:17:42,499 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,515 p=21516 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-22 09:17:42,532 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,548 p=21516 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-22 09:17:42,566 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,571 p=21516 u=mistral | PLAY [Overcloud deploy step tasks for 2] *************************************** >2018-06-22 09:17:42,594 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:17:42,621 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,644 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,664 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,690 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:17:42,740 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,741 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,752 p=21516 u=mistral | skipping: [ceph-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,773 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:17:42,799 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,830 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,835 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,857 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:17:42,907 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,914 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,919 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,944 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:17:42,995 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:42,997 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,013 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,018 p=21516 u=mistral | PLAY [Overcloud common deploy step tasks 2] ************************************ >2018-06-22 09:17:43,049 p=21516 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-22 09:17:43,078 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,102 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,115 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,137 p=21516 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-22 09:17:43,164 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,191 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,202 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,224 p=21516 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-22 09:17:43,254 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,321 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,337 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,359 p=21516 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-22 09:17:43,388 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,412 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,429 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,451 p=21516 u=mistral | TASK [Create 
/var/lib/docker-config-scripts] *********************************** >2018-06-22 09:17:43,476 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,499 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,510 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,531 p=21516 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-22 09:17:43,558 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,582 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,598 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,621 p=21516 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-06-22 09:17:43,679 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera 
-c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf 
keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c 
\"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,680 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster 
password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,680 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir 
/etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,681 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common 
--log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,688 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get 
/etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,689 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif 
[ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,689 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,712 p=21516 u=mistral | TASK [Set docker_config_default fact] 
****************************************** >2018-06-22 09:17:43,743 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,743 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,768 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,769 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,770 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,770 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,771 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,774 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,775 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,775 
p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,778 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,780 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,786 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,789 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,793 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,798 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,802 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,806 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,827 p=21516 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** 
>2018-06-22 09:17:43,852 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,875 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,886 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:43,912 p=21516 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-22 09:17:43,939 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,967 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,976 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:43,997 p=21516 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-22 09:17:44,064 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,071 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,084 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,093 p=21516 u=mistral | skipping: [controller-0] => (item={'value': 
{'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': 
[u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL', u'DB_ROOT_PASSWORD=zeHIZe0ICg'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 
'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", 
"start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL", "DB_ROOT_PASSWORD=zeHIZe0ICg"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,109 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', 
u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', 
u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'6CLNy5Ewot5UhcBYmt27oGDMD'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": 
"/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", 
"/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "6CLNy5Ewot5UhcBYmt27oGDMD"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", 
"/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", 
"user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,134 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': 
[u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'config_volume': 
u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', 
u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 
'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo 
"haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': 
[u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': 
u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': 
u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include 
::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", 
"chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", 
"/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx 
/etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", 
"/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": 
["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, 
"rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,155 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', 
u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", 
"file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, 
"user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", 
"/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,178 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 
'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 
'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', 
u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': 
u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 
'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': 
u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', 
u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, 
"restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", 
"privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", 
"/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", 
"user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,193 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,250 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,251 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,252 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,254 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,257 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,260 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,260 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,264 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': 
u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', 
u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": 
{"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", 
"privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,267 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,297 p=21516 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-22 09:17:44,353 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,354 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,363 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,386 p=21516 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-22 09:17:44,467 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 
'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,490 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,494 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, 
"optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,495 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': u'/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,496 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': u'/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,497 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': 
[{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,498 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,499 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,502 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,576 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 
'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,580 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,584 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file 
/etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,589 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,593 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,597 p=21516 
u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,602 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,606 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": 
"/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,611 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,615 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 
'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,620 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,623 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,627 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': 
u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,631 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,635 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': 
True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,640 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,644 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,648 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 
'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,653 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,656 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': 
u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,660 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,664 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,669 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,673 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,678 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,682 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,686 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, 
"skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,689 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,693 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,698 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': 
u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,701 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,705 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': 
u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,710 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,716 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': 
u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,721 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,723 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,727 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 
'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,733 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,739 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": 
"/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,741 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,746 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 
'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,755 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,756 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,758 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,769 p=21516 u=mistral | skipping: [controller-0] => 
(item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,771 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": 
"/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,773 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,776 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,780 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,788 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,795 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,801 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,808 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': 
True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,813 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,819 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,822 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": 
[{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,832 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,836 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir 
/etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,842 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,852 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,852 p=21516 u=mistral | skipping: 
[controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,856 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,864 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:17:44,869 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,879 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': 
u'/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:44,931 p=21516 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-22 09:17:44,944 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:17:44,974 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:17:45,005 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:17:45,036 p=21516 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-22 09:17:45,098 p=21516 u=mistral | skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": false, "item": 
{"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:45,136 p=21516 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-22 09:17:45,165 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:45,192 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:45,207 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:17:45,228 p=21516 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-22 09:17:45,927 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673465.27-259247503778359/source", "state": "file", "uid": 0} >2018-06-22 09:17:45,976 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673465.33-114091193686332/source", "state": "file", "uid": 0} >2018-06-22 09:17:45,984 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f17091ee142621a3c8290c8c96b5b52d67b3a864", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "0c07a8d2f57375a6b7ce729be89e77fb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673465.29-43290553494121/source", "state": "file", "uid": 0} >2018-06-22 09:17:46,008 p=21516 u=mistral | TASK [Run puppet host configuration for step 2] ******************************** >2018-06-22 09:17:55,179 p=21516 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:17:55,428 p=21516 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:17:59,573 p=21516 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:17:59,595 p=21516 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 2] *** >2018-06-22 09:17:59,651 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.82 seconds", > "Notice: 
/Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller2]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/mode: mode changed '0400' to '0640'", > "Notice: Applied catalog in 3.59 seconds", > "Changes:", > " Total: 4", > "Events:", > " Success: 4", > "Resources:", > " Corrective change: 2", > " Total: 217", > " Out of sync: 4", > " Changed: 4", > "Time:", > " Concat file: 0.00", > " File line: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.00", > " Augeas: 0.02", > " Firewall: 0.02", > " File: 0.11", > " Service: 0.14", > " Pcmk property: 0.33", > " Package: 0.33", > " Exec: 0.77", > " Pcmk resource default: 1.01", > " Last run: 1529673479", > " Config retrieval: 3.35", > " Total: 6.09", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1529673472", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:17:59,669 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.68 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.25 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 141", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.04", > " Service: 0.11", > " Exec: 0.26", > " Package: 0.29", > " Config retrieval: 1.98", > " Last run: 1529673475", > " Total: 2.73", > " Concat fragment: 0.00", > "Version:", > " Config: 1529673471", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:17:59,694 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.83 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage2]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.20 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 135", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Concat file: 0.00", > " Anchor: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.00", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.10", > " Service: 0.11", > " Package: 0.23", > " Exec: 0.26", > " Last run: 1529673474", > " Config retrieval: 2.11", > " Total: 2.85", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1529673471", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:17:59,716 p=21516 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 2] ***************** >2018-06-22 09:17:59,741 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:59,765 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:59,777 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:17:59,798 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 2] *** >2018-06-22 09:17:59,823 p=21516 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:17:59,846 p=21516 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:17:59,859 p=21516 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:17:59,879 p=21516 u=mistral | TASK [Start containers for step 2] ********************************************* >2018-06-22 09:18:00,557 p=21516 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:18:00,564 p=21516 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:32,857 p=21516 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-06-22 09:24:32,878 p=21516 u=mistral | TASK [Debug output for task which failed: Start containers for step 2] ********* >2018-06-22 09:24:32,975 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-22 09:24:32,988 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-22 09:24:44,582 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "5e7b63a88a76: Already exists", > "5ff72e309cb2: Pulling fs layer", > "5ff72e309cb2: Download complete", > "5ff72e309cb2: Pull complete", > "Digest: sha256:66bdbed6e9d047b6e66b91abd2d4b5be29c06391601b7bbb7af3cac7974e15da", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-engine ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-engine", > "15497368e843: Already exists", > "b539b60217fe: Pulling fs layer", > "b539b60217fe: Verifying Checksum", > "b539b60217fe: Download complete", > "b539b60217fe: Pull complete", > "Digest: sha256:9bcd08156fc092b635fb8385d245b910af8a5e947388653ef3487dea959f5f20", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent", > "ea1d509b6f44: Already exists", > "84e2c5d46617: Pulling fs layer", > "84e2c5d46617: Verifying Checksum", > "84e2c5d46617: Download complete", > "84e2c5d46617: Pull complete", > "Digest: sha256:2b12cb81fb6a7677dac134f3ed7968a8291a4916cb68f30860f975a86eb5b2c7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent", > "7ed3720e5907: Pulling fs layer", > "7ed3720e5907: Verifying Checksum", > "7ed3720e5907: Download complete", > "7ed3720e5907: Pull complete", > "Digest: sha256:1d4798d4eeddf04bbfb28605177aa39be1822e328d94cc562d4b0dd9cd2b72ba", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", > "stdout: 5f56eff544b3a805cd08d9e7ff1d5819bbefe82977a9b507587ec2044fca186d", > "stdout: ", > "stderr: Error: unable to find resource 'galera-bundle'", > "stdout: 331f2f35b2ff1d428a9a078a6c1cc931f5f4b65437b87f2e59c4abb0a540fb09", > "stdout: 9274eb8439855a4b6442dcce1145ff46c4492c208d44d21afa62719d6d5e1b27", > "stdout: 4380cddf66e1d95fb37049c913ec6d0e4d7e6a2a9f6a790ab0edccda5a86e878", > "stdout: Skipping execution since this is not the bootstrap node for this service.", > "stdout: 34cf8d7533dd5b75c2799072296e916e2795b41fbc036d5a2411695590303a5b", > "stdout: d07240d8ad40965b34651730ab5538415e21bdd996ad0318e66be530cc069105", > "stdout: 76f76685a3d0c835c1967a815b36ac07346a6fa4cf42f11299ee85d4ff7c7f71", > "stdout: ba976feee8a0ca72780ceb6b9b10c632affd1f784cd701178a0905a28d5a5a29", > "stdout: c5b464d5bbe7471715360c1b3b8cdd41057834bbb0e13c4f2dbd69757f9b0e1b", > "stdout: 681e2c42b5716f3f298018ce1a46fb20c3a74aced784e5318969bcc12603f726", > "stdout: d049dc4bd51bbe498d470c0cf41a2f2e2890b37afcb53b465a9f0deb46dd0a47", > 
"stdout: d52e558629eb79761254bea185e10c5fe862d987895eb060e8efd3d16f04e918", > "stdout: Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=US-ASCII", > "Debug: Evicting cache entry for environment 'production'", > "Debug: Caching environment 'production' (ttl = 0 sec)", > "Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d", > "Debug: Loading external facts from /var/lib/puppet/facts.d", > "Info: Loading facts", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /etc/puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /etc/puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from 
/etc/puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from 
/etc/puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /etc/puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /etc/puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /etc/puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Facter: Found no suitable resolves of 1 for ec2_metadata", > "Debug: Facter: value for ec2_metadata is still nil", > "Debug: Executing: '/usr/bin/rpm --version'", > "Debug: Failed to load library 'cfpropertylist' for feature 'cfpropertylist'", > "Debug: Executing: '/usr/bin/rpm -ql rpm'", > "Debug: Facter: value for agent_specified_environment is still nil", > "Debug: Facter: value for cfkey is still nil", > "Debug: Facter: Found no suitable resolves of 1 for dhcp_servers", > "Debug: Facter: value for dhcp_servers is still nil", > "Debug: Facter: Found no suitable resolves of 1 for gce", > "Debug: Facter: value for gce is still nil", > "Debug: Facter: value for ipaddress6_br_ex is still nil", > "Debug: Facter: value for ipaddress_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_br_isolated is still nil", > "Debug: Facter: value for netmask_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_docker0 is still nil", > "Debug: Facter: value for ipaddress6_eth0 is still nil", > "Debug: Facter: value for ipaddress_eth1 is still nil", > "Debug: Facter: value for ipaddress6_eth1 is still nil", > "Debug: Facter: value for netmask_eth1 is still nil", > "Debug: Facter: value for ipaddress_eth2 is still nil", > "Debug: Facter: value for ipaddress6_eth2 is still nil", > "Debug: Facter: value for netmask_eth2 is still nil", > "Debug: Facter: value for ipaddress6_lo is still nil", > "Debug: Facter: value for macaddress_lo is still nil", > "Debug: Facter: value for ipaddress_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_ovs_system is still nil", > "Debug: Facter: value for netmask_ovs_system is still nil", > "Debug: Facter: value for 
ipaddress6_vlan20 is still nil", > "Debug: Facter: value for ipaddress6_vlan30 is still nil", > "Debug: Facter: value for ipaddress6_vlan40 is still nil", > "Debug: Facter: value for ipaddress6_vlan50 is still nil", > "Debug: Facter: value for ipaddress6 is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iphostnumber", > "Debug: Facter: value for iphostnumber is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename", > "Debug: Facter: value for lsbdistcodename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription", > "Debug: Facter: value for lsbdistdescription is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistid", > "Debug: Facter: value for lsbdistid is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistrelease", > "Debug: Facter: value for lsbdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease", > "Debug: Facter: value for lsbmajdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbminordistrelease", > "Debug: Facter: value for lsbminordistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbrelease", > "Debug: Facter: value for lsbrelease is still nil", > "Debug: Facter: Found no suitable resolves of 2 for swapencrypted", > "Debug: Facter: value for swapencrypted is still nil", > "Debug: Facter: value for network_br_isolated is still nil", > "Debug: Facter: value for network_eth1 is still nil", > "Debug: Facter: value for network_eth2 is still nil", > "Debug: Facter: value for network_ovs_system is still nil", > "Debug: Facter: Found no suitable resolves of 1 for processor", > "Debug: Facter: value for processor is still nil", > "Debug: Facter: value for is_rsc is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_region", > "Debug: Facter: value for rsc_region is still nil", > "Debug: Facter: Found no suitable 
resolves of 1 for rsc_instance_id", > "Debug: Facter: value for rsc_instance_id is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_enforced", > "Debug: Facter: value for selinux_enforced is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_policyversion", > "Debug: Facter: value for selinux_policyversion is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_current_mode", > "Debug: Facter: value for selinux_current_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_mode", > "Debug: Facter: value for selinux_config_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_policy", > "Debug: Facter: value for selinux_config_policy is still nil", > "Debug: Facter: value for sshdsakey is still nil", > "Debug: Facter: value for sshfp_dsa is still nil", > "Debug: Facter: value for sshrsakey is still nil", > "Debug: Facter: value for sshfp_rsa is still nil", > "Debug: Facter: value for sshecdsakey is still nil", > "Debug: Facter: value for sshfp_ecdsa is still nil", > "Debug: Facter: value for sshed25519key is still nil", > "Debug: Facter: value for sshfp_ed25519 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for system32", > "Debug: Facter: value for system32 is still nil", > "Debug: Facter: value for vlans is still nil", > "Debug: Facter: Found no suitable resolves of 1 for xendomains", > "Debug: Facter: value for xendomains is still nil", > "Debug: Facter: value for zfs_version is still nil", > "Debug: Facter: Found no suitable resolves of 1 for zonename", > "Debug: Facter: value for zonename is still nil", > "Debug: Facter: value for zpool_version is still nil", > "Debug: Facter: value for java_version is still nil", > "Debug: Facter: value for java_major_version is still nil", > "Debug: Facter: value for java_patch_level is still nil", > "Debug: Facter: value for java_default_home is still nil", > "Debug: 
Facter: value for java_libjvm_path is still nil", > "Debug: Facter: value for ssh_client_version_full is still nil", > "Debug: Facter: value for ssh_client_version_major is still nil", > "Debug: Facter: value for ssh_client_version_release is still nil", > "Debug: Facter: value for ssh_server_version_full is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_major", > "Debug: Facter: value for ssh_server_version_major is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_release", > "Debug: Facter: value for ssh_server_version_release is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iptables_persistent_version", > "Debug: Facter: value for iptables_persistent_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for staging_windir", > "Debug: Facter: value for staging_windir is still nil", > "Debug: Facter: value for cassandrarelease is still nil", > "Debug: Facter: value for cassandraminorversion is still nil", > "Debug: Facter: value for cassandrapatchversion is still nil", > "Debug: Facter: value for cassandramajorversion is still nil", > "Debug: Facter: value for mysqld_version is still nil", > "Debug: Facter: value for mysql_version is still nil", > "Debug: Facter: value for git_html_path is still nil", > "Debug: Facter: value for git_version is still nil", > "Debug: Facter: value for git_exec_path is still nil", > "Debug: Facter: value for collectd_version is still nil", > "Debug: Facter: value for sssd_version is still nil", > "Debug: Facter: value for rabbitmq_nodename is still nil", > "Debug: Facter: value for rabbitmq_version is still nil", > "Debug: Puppet::Type::Package::ProviderSensu_gem: file /opt/sensu/embedded/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderTdagent: file /opt/td-agent/usr/sbin/td-agent-gem does not exist", > "Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist", > "Debug: 
Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist", > "Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist", > "Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist", > "Debug: Puppet::Type::Package::ProviderDnf: file dnf does not exist", > "Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist", > "Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does not exist", > "Debug: Puppet::Type::Package::ProviderNim: file /usr/sbin/nimclient does not exist", > "Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist", > "Debug: Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist", > "Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist", > "Debug: Puppet::Type::Package::ProviderPkgng: file /usr/local/sbin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not exist", > "Debug: Puppet::Type::Package::ProviderPorts: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPuppet_gem: file /opt/puppetlabs/puppet/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist", > "Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist", > "Debug: Puppet::Type::Package::ProviderTdnf: file tdnf does not exist", > "Debug: 
Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist", > "Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist", > "Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist", > "Debug: Facter: value for pe_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_major_version", > "Debug: Facter: value for pe_major_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_minor_version", > "Debug: Facter: value for pe_minor_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_patch_version", > "Debug: Facter: value for pe_patch_version is still nil", > "Debug: Puppet::Type::Service::ProviderNoop: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderInit: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist", > "Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d does not exist", > "Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist", > "Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenbsd: file /usr/sbin/rcctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not exist", > "Debug: Puppet::Type::Service::ProviderRedhat: file /sbin/service does not exist", > "Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist", > "Debug: Puppet::Type::Service::ProviderUpstart: 0 confines (of 4) were true", > "Debug: Facter: value for redis_server_version is still nil", > "Debug: Facter: value for apache_version is still nil", > "Debug: Facter: value for nic_alias is still nil", > "Debug: Facter: value for netmask6_ovs_system is still nil", > "Debug: Facter: value for ovs_uuid is still nil", > "Debug: Facter: value for ovs_version is still nil", > "Debug: 
Facter: Found no suitable resolves of 2 for archive_windir", > "Debug: Facter: value for archive_windir is still nil", > "Debug: Facter: value for sensu_version is still nil", > "Debug: Facter: value for ipa_hostname is still nil", > "Debug: Facter: value for libvirt_uuid is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/pacemaker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::pacemaker from tripleo/profile/base/pacemaker into production", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Debug: hiera(): Hiera JSON backend starting", > "Debug: hiera(): Looking up lookup_options in JSON backend", > "Debug: hiera(): Looking for data source docker", > "Debug: hiera(): Looking for data source heat_config_", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/heat_config_.json, skipping", > "Debug: hiera(): Looking for data source config_step", > "Debug: hiera(): Looking for data source controller_extraconfig", > "Debug: hiera(): Looking for data source extraconfig", > "Debug: hiera(): Looking for data source service_names", > "Debug: hiera(): Looking for data source service_configs", > "Debug: hiera(): Looking for data source controller", > "Debug: hiera(): Looking for data source bootstrap_node", > "Debug: hiera(): Looking for data source all_nodes", > "Debug: hiera(): Looking for data source vip_data", > "Debug: hiera(): Looking for data source net_ip_map", > "Debug: hiera(): Looking for data source RedHat", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/RedHat.json, skipping", > "Debug: hiera(): Looking for data source neutron_bigswitch_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_bigswitch_data.json, skipping", > "Debug: hiera(): Looking for data source 
neutron_cisco_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_cisco_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_n1kv_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_n1kv_data.json, skipping", > "Debug: hiera(): Looking for data source midonet_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/midonet_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_aci_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_aci_data.json, skipping", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_node_ips in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_authkey in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::encryption in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::enable_instanceha in JSON backend", > "Debug: hiera(): Looking up step in JSON backend", > "Debug: hiera(): Looking up pcs_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_short_node_names in JSON backend", > "Debug: hiera(): Looking 
up pacemaker_remote_node_ips in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker_cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::instanceha in JSON backend", > "Debug: hiera(): Looking up hacluster_pwd in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_fencing in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_node_names in JSON backend", > "Debug: hiera(): Looking up corosync_ipv6 in JSON backend", > "Debug: hiera(): Looking up corosync_token_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/init.pp' in environment production", > "Debug: Automatically imported pacemaker from pacemaker into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/params.pp' in environment production", > "Debug: Automatically imported pacemaker::params from pacemaker/params into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/install.pp' in environment production", > "Debug: Automatically imported pacemaker::install from pacemaker/install into production", > "Debug: hiera(): Looking up pacemaker::install::ensure in JSON backend", > "Debug: Resource package[pacemaker] was not determined to be defined", > "Debug: Create new resource package[pacemaker] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pcs] was not determined to be defined", > "Debug: Create new resource package[pcs] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[fence-agents-all] was not determined to be defined", > "Debug: Create new resource 
package[fence-agents-all] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pacemaker-libs] was not determined to be defined", > "Debug: Create new resource package[pacemaker-libs] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/service.pp' in environment production", > "Debug: Automatically imported pacemaker::service from pacemaker/service into production", > "Debug: hiera(): Looking up pacemaker::service::ensure in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasstatus in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasrestart in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/corosync.pp' in environment production", > "Debug: Automatically imported pacemaker::corosync from pacemaker/corosync into production", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_members_rrp in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_name in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::manage_fw in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::pcsd_debug in JSON backend", > "Debug: template[inline]: Bound template variables for inline template in 0.00 seconds", > "Debug: template[inline]: Interpolated template inline template in 0.00 seconds", > "Debug: hiera(): Looking up 
docker_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/systemd/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/systemctl/daemon_reload.pp' in environment production", > "Debug: Automatically imported systemd::systemctl::daemon_reload from systemd/systemctl/daemon_reload into production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/unit_file.pp' in environment production", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/init.pp' in environment production", > "Debug: Automatically imported systemd::unit_file from systemd/unit_file into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/stonith.pp' in environment production", > "Debug: Automatically imported pacemaker::stonith from pacemaker/stonith into production", > "Debug: hiera(): Looking up pacemaker::stonith::try_sleep in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/property.pp' in environment production", > "Debug: Automatically imported pacemaker::property from pacemaker/property into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource_defaults.pp' in environment production", > "Debug: Automatically imported pacemaker::resource_defaults from pacemaker/resource_defaults into production", > "Debug: hiera(): Looking up pacemaker::resource_defaults::defaults in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::post_success_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::verify_on_create in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/rabbitmq_bundle.pp' in environment 
production", > "Debug: Automatically imported tripleo::profile::pacemaker::rabbitmq_bundle from tripleo/profile/pacemaker/rabbitmq_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rabbitmq_docker_control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::erlang_cookie in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::user_ha_queues in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::rpc_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::notify_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::rabbitmq_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::erlang_cookie in JSON backend", > "Debug: hiera(): Looking up rabbitmq::nr_ha_queues in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_short_bootstrap_node_name in JSON backend", 
> "Debug: hiera(): Looking up oslo_messaging_rpc_node_names in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_node_names in JSON backend", > "Debug: hiera(): Looking up enable_internal_tls in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/rabbitmq.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::rabbitmq from tripleo/profile/base/rabbitmq into production", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::config_variables in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::environment in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::ssl_versions in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::inter_node_ciphers in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::inet_dist_interface in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::ipv6 in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::kernel_variables in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_nodes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rpc_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_scheme in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::notify_nodes in JSON backend", > "Debug: 
hiera(): Looking up tripleo::profile::base::rabbitmq::notify_bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rabbitmq_pass in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::rabbitmq_user in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::stack_action in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::rabbitmq::step in JSON backend", > "Debug: hiera(): Looking up rabbitmq_config_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq_environment in JSON backend", > "Debug: hiera(): Looking up rabbitmq::interface in JSON backend", > "Debug: hiera(): Looking up internal_api in JSON backend", > "Debug: hiera(): Looking up rabbit_ipv6 in JSON backend", > "Debug: hiera(): Looking up rabbitmq_kernel_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::default_pass in JSON backend", > "Debug: hiera(): Looking up rabbitmq::default_user in JSON backend", > "Debug: hiera(): Looking up stack_action in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/init.pp' in environment production", > "Debug: Automatically imported rabbitmq from rabbitmq into production", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/params.pp' in environment production", > "Debug: Automatically imported rabbitmq::params from rabbitmq/params into production", > "Debug: hiera(): Looking up rabbitmq::admin_enable in JSON backend", > "Debug: hiera(): Looking up rabbitmq::cluster_node_type in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_ranch in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_stomp in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_shovel in JSON 
backend", > "Debug: hiera(): Looking up rabbitmq::config_shovel_statics in JSON backend", > "Debug: hiera(): Looking up rabbitmq::delete_guest_user in JSON backend", > "Debug: hiera(): Looking up rabbitmq::env_config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::env_config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_ip_address in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::management_hostname in JSON backend", > "Debug: hiera(): Looking up rabbitmq::node_ip_address in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_apt_pin in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_gpg_key in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_name in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_source in JSON backend", > "Debug: hiera(): Looking up rabbitmq::package_provider in JSON backend", > "Debug: hiera(): Looking up rabbitmq::repos_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::manage_python in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_user in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_group in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmq_home in JSON backend", > "Debug: hiera(): Looking up rabbitmq::port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_keepalive in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_backlog in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_sndbuf in JSON backend", > "Debug: hiera(): Looking up rabbitmq::tcp_recbuf in JSON backend", > "Debug: hiera(): Looking up rabbitmq::heartbeat in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service_ensure in JSON backend", > "Debug: hiera(): Looking 
up rabbitmq::service_name in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_only in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cacert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_key in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_depth in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_cert_password in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_interface in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_stomp_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_verify in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_fail_if_no_peer_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_verify in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_management_fail_if_no_peer_cert in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_versions in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_secure_renegotiate in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_reuse_sessions in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_honor_cipher_order in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_dhfile in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_ciphers in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_auth in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_server in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_user_dn_pattern in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_other_bind in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_use_ssl 
in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_log in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ldap_config_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_port in JSON backend", > "Debug: hiera(): Looking up rabbitmq::stomp_ssl_only in JSON backend", > "Debug: hiera(): Looking up rabbitmq::wipe_db_on_cookie_change in JSON backend", > "Debug: hiera(): Looking up rabbitmq::cluster_partition_handling in JSON backend", > "Debug: hiera(): Looking up rabbitmq::file_limit in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_management_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::config_additional_variables in JSON backend", > "Debug: hiera(): Looking up rabbitmq::auth_backends in JSON backend", > "Debug: hiera(): Looking up rabbitmq::key_content in JSON backend", > "Debug: hiera(): Looking up rabbitmq::collect_statistics_interval in JSON backend", > "Debug: hiera(): Looking up rabbitmq::inetrc_config in JSON backend", > "Debug: hiera(): Looking up rabbitmq::inetrc_config_path in JSON backend", > "Debug: hiera(): Looking up rabbitmq::ssl_erl_dist in JSON backend", > "Debug: hiera(): Looking up rabbitmq::rabbitmqadmin_package in JSON backend", > "Debug: hiera(): Looking up rabbitmq::archive_options in JSON backend", > "Debug: hiera(): Looking up rabbitmq::loopback_users in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/install.pp' in environment production", > "Debug: Automatically imported rabbitmq::install from rabbitmq/install into production", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/config.pp' in environment production", > "Debug: Automatically imported rabbitmq::config from rabbitmq/config into production", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq.config.erb", > "Debug: 
template[/etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq.config.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq-env.conf.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq-env.conf.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/inetrc.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/inetrc.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/inetrc.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/inetrc.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/inetrc.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmqadmin.conf.erb", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmqadmin.conf.erb in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/rabbitmq-server.service.d/limits.conf", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf]: Bound template variables for 
/etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf]: Interpolated template /etc/puppet/modules/rabbitmq/templates/rabbitmq-server.service.d/limits.conf in 0.00 seconds", > "Debug: Scope(Class[Rabbitmq::Config]): Retrieving template rabbitmq/limits.conf", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/limits.conf]: Bound template variables for /etc/puppet/modules/rabbitmq/templates/limits.conf in 0.00 seconds", > "Debug: template[/etc/puppet/modules/rabbitmq/templates/limits.conf]: Interpolated template /etc/puppet/modules/rabbitmq/templates/limits.conf in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/service.pp' in environment production", > "Debug: Automatically imported rabbitmq::service from rabbitmq/service into production", > "Debug: hiera(): Looking up rabbitmq::service::service_ensure in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service::service_manage in JSON backend", > "Debug: hiera(): Looking up rabbitmq::service::service_name in JSON backend", > "Debug: importing '/etc/puppet/modules/rabbitmq/manifests/management.pp' in environment production", > "Debug: Automatically imported rabbitmq::management from rabbitmq/management into production", > "Debug: hiera(): Looking up veritas_hyperscale_controller_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/pacemaker/resource_restart_flag.pp' in environment production", > "Debug: Automatically imported tripleo::pacemaker::resource_restart_flag from tripleo/pacemaker/resource_restart_flag into production", > "Debug: hiera(): Looking up oslo_messaging_rpc_short_node_names in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/bundle.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::bundle from pacemaker/resource/bundle into 
production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/ocf.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::ocf from pacemaker/resource/ocf into production", > "Debug: hiera(): Looking up systemd::service_limits in JSON backend", > "Debug: hiera(): Looking up systemd::manage_resolved in JSON backend", > "Debug: hiera(): Looking up systemd::resolved_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_networkd in JSON backend", > "Debug: hiera(): Looking up systemd::networkd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_timesyncd in JSON backend", > "Debug: hiera(): Looking up systemd::timesyncd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::ntp_server in JSON backend", > "Debug: hiera(): Looking up systemd::fallback_ntp_server in JSON backend", > "Debug: Resource file[/var/lib/tripleo] was not determined to be defined", > "Debug: Create new resource file[/var/lib/tripleo] with params {\"ensure\"=>\"directory\", \"owner\"=>\"root\", \"mode\"=>\"0755\", \"group\"=>\"root\"}", > "Debug: Resource file[/var/lib/tripleo/pacemaker-restarts] was not determined to be defined", > "Debug: Create new resource file[/var/lib/tripleo/pacemaker-restarts] with params {\"ensure\"=>\"directory\", \"owner\"=>\"root\", \"mode\"=>\"0755\", \"group\"=>\"root\"}", > "Debug: hiera(): Looking up pacemaker::resource::bundle::deep_compare in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource::ocf::deep_compare in JSON backend", > "Debug: Adding relationship from Service[pcsd] to Exec[auth-successful-across-all-nodes] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[Create Cluster 
tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[corosync] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[pacemaker] with 'before'", > "Debug: Adding relationship from Service[corosync] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker-authkey] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[rabbitmq] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property--stonith-enabled] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-rabbitmq-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[rabbitmq-bundle] with 'before'", > "Debug: Adding relationship from Class[Pacemaker] to Class[Pacemaker::Corosync] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/resource-agents-deps.target.wants] to Systemd::Unit_file[docker.service] with 'before'", > "Debug: Adding relationship from Systemd::Unit_file[docker.service] to Class[Systemd::Systemctl::Daemon_reload] with 'notify'", > "Debug: Adding relationship from File[/etc/systemd/system/rabbitmq-server.service.d] to File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf] with 'before'", > "Debug: Adding relationship from Class[Rabbitmq::Install] to Class[Rabbitmq::Config] 
with 'before'", > "Debug: Adding relationship from Class[Rabbitmq::Config] to Class[Rabbitmq::Service] with 'notify'", > "Debug: Adding relationship from Class[Rabbitmq::Service] to Class[Rabbitmq::Management] with 'before'", > "Debug: Adding relationship from Exec[rabbitmq-ready] to Rabbitmq_user[guest] with 'before'", > "Debug: Adding relationship from File[/var/lib/tripleo/pacemaker-restarts] to Exec[rabbitmq-clone resource restart flag] with 'before'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.60 seconds", > "Debug: puppet-pacemaker: initialize()", > "Debug: Creating default schedules", > "Info: Applying configuration version '1529673503'", > "Debug: /Stage[main]/Pacemaker/before: subscribes to Class[Pacemaker::Corosync]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Exec[auth-successful-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/before: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/notify: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/notify: subscribes to 
Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/require: subscribes to User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/require: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[corosync]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[pacemaker]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[rabbitmq]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property--stonith-enabled]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-rabbitmq-role]", > "Debug: 
/Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/before: subscribes to Systemd::Unit_file[docker.service]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/before: subscribes to Class[Pacemaker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Rabbitmq::Install/before: subscribes to Class[Rabbitmq::Config]", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/require: subscribes to File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/before: subscribes to File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/notify: subscribes to Exec[rabbitmq-systemd-reload]", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: 
/Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]/before: subscribes to File[rabbitmq.config]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]/notify: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Service/before: subscribes to Class[Rabbitmq::Management]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]/require: subscribes to Class[Rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]/subscribe: subscribes to Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/require: subscribes to Class[Rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/require: subscribes to Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/before: subscribes to Exec[rabbitmq-ready]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/before: subscribes to Rabbitmq_user[guest]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]/File[/var/lib/tripleo/pacemaker-restarts]/before: subscribes to Exec[rabbitmq-clone resource restart flag]", > "Debug: 
/Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: Adding autorequire relationship with File[/etc/rabbitmq]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Adding autorequire relationship with File[/etc/systemd/system/resource-agents-deps.target.wants]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]/File[/var/lib/tripleo/pacemaker-restarts]: Adding autorequire relationship with File[/var/lib/tripleo]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Stage[main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Settings]: Resource is being skipped, unscheduling all events", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Install]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching yum resources for package", > "Debug: Executing '/usr/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n''", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Resource is being skipped, unscheduling all events", > 
"Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Service]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh event", > "Debug: 
Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Systemd::Unit_file[docker.service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Stonith]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Property[Disable STONITH]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker::Resource_defaults]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Resource is being skipped, unscheduling all events", > 
"Debug: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Tripleo::Profile::Base::Rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Install/Package[rabbitmq-server]: Resource is being skipped, unscheduling all events", > "Debug: Class[Rabbitmq::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Config]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}b126e4b8423a26246952d34c225c6fdd'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]: Scheduling refresh of Class[Rabbitmq::Service]", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]: Scheduling refresh of 
Class[Rabbitmq::Service]", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Info: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]: Scheduling refresh of Exec[rabbitmq-systemd-reload]", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Rabbitmq::Config/Exec[rabbitmq-systemd-reload]: Unscheduling all events on Exec[rabbitmq-systemd-reload]", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Debug: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: 
/Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]: Scheduling refresh of Class[Rabbitmq::Service]", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Rabbitmq::Config/Rabbitmq_erlang_cookie[/var/lib/rabbitmq/.erlang.cookie]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /etc/rabbitmq/rabbitmq.config", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Filebucketed /etc/rabbitmq/rabbitmq.config to puppet with sum b346ec0a8320f85f795bf612f6b02da7", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}1e1a80b34927c980a0411cf7e41d2054'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Debug: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: The container Class[Rabbitmq::Config] will propagate my refresh event", > "Info: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]: Scheduling refresh of Class[Rabbitmq::Service]", > "Info: Class[Rabbitmq::Config]: Unscheduling all events on Class[Rabbitmq::Config]", > "Debug: Class[Rabbitmq::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Service]: Resource is being skipped, unscheduling all events", > "Info: Class[Rabbitmq::Service]: Unscheduling all events on 
Class[Rabbitmq::Service]", > "Debug: Class[Rabbitmq::Management]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Rabbitmq::Management]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /var/lib/rabbitmq/.erlang.cookie", > "Info: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]: Filebucketed /var/lib/rabbitmq/.erlang.cookie to puppet with sum 8f7bb17b28ae360965eae4f83a06e6cc", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]/content: content changed '{md5}8f7bb17b28ae360965eae4f83a06e6cc' to '{md5}245bbd9711e99c5bfac6fc7dc7b1767b'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]: The container Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle] will propagate my refresh event", > "Debug: Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[rabbitmq-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Property[rabbitmq-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, 
concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Systemd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/mode: Not managing symlink mode", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: The container Systemd::Unit_file[docker.service] will propagate my refresh event", > "Info: Systemd::Unit_file[docker.service]: Unscheduling all events on Systemd::Unit_file[docker.service]", > "Info: Class[Tripleo::Profile::Base::Pacemaker]: Unscheduling all events on Class[Tripleo::Profile::Base::Pacemaker]", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, 
rabbitmq_ready", > "Debug: Class[Pacemaker::Corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: The container 
Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}a839b1ab3552f629efbcc7aaf42e7964'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: 
/Stage[main]/Pacemaker::Service/Service[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Resource is being skipped, unscheduling all events", > "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Resource is being skipped, unscheduling all events", > "Info: Class[Systemd::Systemctl::Daemon_reload]: Unscheduling all events on Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-v6xyit returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-v6xyit property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: property exists: property show | grep stonith-enabled | grep false > /dev/null 2>&1 -> ", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]/File[/var/lib/tripleo]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]/File[/var/lib/tripleo]: The container Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]/File[/var/lib/tripleo/pacemaker-restarts]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]/File[/var/lib/tripleo/pacemaker-restarts]: The container Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone] will propagate my refresh event", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]/Exec[rabbitmq-clone resource restart flag]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]/Exec[rabbitmq-clone resource restart flag]: Resource is being skipped, unscheduling all events", > "Info: Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]: Unscheduling all events on Tripleo::Pacemaker::Resource_restart_flag[rabbitmq-clone]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1qz1p69 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1qz1p69 property show | grep rabbitmq-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | 
grep rabbitmq-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1nuplde returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1nuplde property set --node controller-0 rabbitmq-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1nuplde diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1nuplde.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 rabbitmq-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/Pcmk_property[property-controller-0-rabbitmq-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Property[rabbitmq-role-controller-0]/Pcmk_property[property-controller-0-rabbitmq-role]: The container Pacemaker::Property[rabbitmq-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[rabbitmq-role-controller-0]: Unscheduling all events on Pacemaker::Property[rabbitmq-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1bvrshd returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1bvrshd constraint list | grep location-rabbitmq-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ivb0vg returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ivb0vg resource show rabbitmq-bundle > /dev/null 2>&1", > "Debug: Exists: bundle rabbitmq-bundle exists 1 location exists 1 deep_compare: false", > "Debug: Create: resource exists 1 location exists 1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1i9olzq returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1i9olzq resource bundle create rabbitmq-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest replicas=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=rabbitmq-cfg-files source-dir=/var/lib/kolla/config_files/rabbitmq.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=rabbitmq-cfg-data source-dir=/var/lib/config-data/puppet-generated/rabbitmq/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=rabbitmq-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=rabbitmq-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=rabbitmq-lib source-dir=/var/lib/rabbitmq target-dir=/var/lib/rabbitmq options=rw storage-map id=rabbitmq-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=rabbitmq-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=rabbitmq-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=rabbitmq-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=rabbitmq-log 
source-dir=/var/log/containers/rabbitmq target-dir=/var/log/rabbitmq options=rw storage-map id=rabbitmq-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3122 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1i9olzq diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1i9olzq.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: location_rule_create: constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-4rf7z4 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-4rf7z4 constraint location rabbitmq-bundle rule resource-discovery=exclusive score=0 rabbitmq-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-4rf7z4 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-4rf7z4.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-gstwcv returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-gstwcv resource enable rabbitmq-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-gstwcv diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-gstwcv.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Bundle[rabbitmq-bundle]/Pcmk_bundle[rabbitmq-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Bundle[rabbitmq-bundle]/Pcmk_bundle[rabbitmq-bundle]: The container 
Pacemaker::Resource::Bundle[rabbitmq-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[rabbitmq-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[rabbitmq-bundle]", > "Debug: Pacemaker::Resource::Ocf[rabbitmq]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: Pacemaker::Resource::Ocf[rabbitmq]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-lkw78s returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-lkw78s constraint list | grep location-rabbitmq-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-reb3wm returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-reb3wm resource show rabbitmq > /dev/null 2>&1", > "Debug: Exists: resource rabbitmq exists 1 location exists 0 resource deep_compare: false", > "Debug: Create: resource exists 1 location exists 0", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1cvud5m returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1cvud5m resource create rabbitmq ocf:heartbeat:rabbitmq-cluster set_policy='ha-all ^(?!amq\\.).* {\"ha-mode\":\"all\"}' meta notify=true container-attribute-target=host op start timeout=200s stop timeout=200s bundle rabbitmq-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1cvud5m diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1cvud5m.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/Pcmk_resource[rabbitmq]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Pacemaker::Resource::Ocf[rabbitmq]/Pcmk_resource[rabbitmq]: The container Pacemaker::Resource::Ocf[rabbitmq] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[rabbitmq]: Unscheduling all events on Pacemaker::Resource::Ocf[rabbitmq]", > "Debug: Exec[rabbitmq-ready](provider=posix): Executing check 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: Executing: 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: Error: Failed to initialize erlang distribution: {{shutdown,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {failed_to_start_child,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: net_kernel,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {'EXIT',nodistribution}}},", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {child,undefined,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: net_sup_dynamic,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: {erl_distribution,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: start_link,", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: [['rabbitmq-cli-08',", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: shortnames]]},", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: permanent,1000,supervisor,", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/unless: [erl_distribution]}}.", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 1/180", > "Debug: Exec[rabbitmq-ready](provider=posix): Executing 'rabbitmqctl status | grep -F \"{rabbit,\"'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Sleeping for 10 seconds between tries", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 2/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: Exec try 3/180", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]/returns: executed successfully", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/Exec[rabbitmq-ready]: The container Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle] will propagate my refresh event", > "Info: Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]: Unscheduling all events on Class[Tripleo::Profile::Pacemaker::Rabbitmq_bundle]", > "Debug: Prefetching rabbitmqctl resources for rabbitmq_user", > "Debug: Executing: '/usr/sbin/rabbitmqctl -q list_users'", > "Debug: Command succeeded", > "Debug: Executing: '/usr/sbin/rabbitmqctl eval rabbit_access_control:check_user_pass_login(list_to_binary(\"guest\"), list_to_binary(\"YmTqk7aXaBM0jJVYmOszouRa7\")).'", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[puppet]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[hourly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[daily]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[weekly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[monthly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Schedule[never]: Resource is being skipped, unscheduling all events", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, rabbitmq_policy, rabbitmq_user, rabbitmq_ready", > "Debug: /Filebucket[puppet]: Resource is being skipped, unscheduling all events", > "Debug: Finishing transaction 44156200", > 
"Debug: Storing state", > "Info: Creating state file /var/lib/puppet/state/state.yaml", > "Debug: Stored state in 0.00 seconds", > "Notice: Applied catalog in 63.58 seconds", > "Changes:", > " Total: 23", > "Events:", > " Success: 23", > "Resources:", > " Changed: 20", > " Out of sync: 20", > " Skipped: 26", > " Total: 49", > "Time:", > " File line: 0.00", > " File: 0.05", > " Rabbitmq user: 1.48", > " Config retrieval: 1.76", > " Last run: 1529673568", > " Pcmk bundle: 16.67", > " Exec: 25.63", > " Total: 63.66", > " Pcmk property: 8.64", > " Pcmk resource: 9.44", > "Version:", > " Config: 1529673503", > " Puppet: 4.8.2", > "Debug: Applying settings catalog for sections main, reporting, metrics", > "Debug: Using settings: adding file resource 'confdir': 'File[/etc/puppet]{:path=>\"/etc/puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'vardir': 'File[/var/lib/puppet]{:path=>\"/var/lib/puppet\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'logdir': 'File[/var/log/puppet]{:path=>\"/var/log/puppet\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'statedir': 'File[/var/lib/puppet/state]{:path=>\"/var/lib/puppet/state\", :mode=>\"1755\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'rundir': 'File[/var/run/puppet]{:path=>\"/var/run/puppet\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'libdir': 'File[/var/lib/puppet/lib]{:path=>\"/var/lib/puppet/lib\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > 
"Debug: Using settings: adding file resource 'hiera_config': 'File[/etc/puppet/hiera.yaml]{:path=>\"/etc/puppet/hiera.yaml\", :ensure=>:file, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'preview_outputdir': 'File[/var/lib/puppet/preview]{:path=>\"/var/lib/puppet/preview\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'certdir': 'File[/etc/puppet/ssl/certs]{:path=>\"/etc/puppet/ssl/certs\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'ssldir': 'File[/etc/puppet/ssl]{:path=>\"/etc/puppet/ssl\", :mode=>\"771\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'publickeydir': 'File[/etc/puppet/ssl/public_keys]{:path=>\"/etc/puppet/ssl/public_keys\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'requestdir': 'File[/etc/puppet/ssl/certificate_requests]{:path=>\"/etc/puppet/ssl/certificate_requests\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatekeydir': 'File[/etc/puppet/ssl/private_keys]{:path=>\"/etc/puppet/ssl/private_keys\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatedir': 'File[/etc/puppet/ssl/private]{:path=>\"/etc/puppet/ssl/private\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, 
:loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'pluginfactdest': 'File[/var/lib/puppet/facts.d]{:path=>\"/var/lib/puppet/facts.d\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: /File[/var/lib/puppet/state]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/var/lib/puppet/lib]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/hiera.yaml]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/var/lib/puppet/preview]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/ssl/certs]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/etc/puppet/ssl/public_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/certificate_requests]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/var/lib/puppet/facts.d]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: Finishing transaction 41408560", > "Debug: Received report to process from controller-0.localdomain", > "Debug: Processing report from controller-0.localdomain with processor Puppet::Reports::Store", > "stderr: + STEP=2", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle'", > "+ EXTRA_ARGS=--debug", > "+ '[' -d 
/tmp/puppet-etc ']'", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ echo '{\"step\": 2}'", > "+ export FACTER_uuid=docker", > "+ FACTER_uuid=docker", > "+ set +e", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle'", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='rabbitmq_nodename', resolution='<anonymous>': undefined method `[]' for nil:NilClass", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rc=2", > "+ set -e", > "+ set +ux", > "Debug: Facter: value for erl_ssl_path is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::database::mysql_bundle from tripleo/profile/pacemaker/database/mysql_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::mysql_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::bind_address in JSON backend", > "Debug: hiera(): Looking up fqdn_internal_api in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::ca_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::cipher_list in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::gcomm_cipher in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::gmcast_listen_addr in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::innodb_flush_log_at_trx_commit in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::sst_tls_cipher in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::database::mysql_bundle::sst_tls_options in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::ipv6 in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::mysql_bundle::step in JSON backend", > "Debug: hiera(): Looking up mysql_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::certificate_specs in JSON backend", > "Debug: hiera(): Looking up mysql_bind_host in JSON backend", > "Debug: hiera(): Looking up innodb_flush_log_at_trx_commit in JSON backend", > "Debug: hiera(): Looking up mysql_ipv6 in JSON backend", > "Debug: hiera(): Looking up mysql_short_node_names in JSON backend", > "Debug: hiera(): Looking up mysql_node_names in JSON backend", > "Debug: hiera(): Looking up mysql_max_connections in JSON backend", > "Debug: hiera(): Looking up mysql::server::root_password in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::database::mysql from tripleo/profile/base/database/mysql into production", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::bind_address in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::generate_dropin_file_limit in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::innodb_buffer_pool_size in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::mysql_max_connections in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::database::mysql::step in JSON backend", > "Debug: hiera(): 
Looking up innodb_buffer_pool_size in JSON backend", > "Debug: hiera(): Looking up enable_galera in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server.pp' in environment production", > "Debug: Automatically imported mysql::server from mysql/server into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/params.pp' in environment production", > "Debug: Automatically imported mysql::params from mysql/params into production", > "Debug: hiera(): Looking up mysql::server::includedir in JSON backend", > "Debug: hiera(): Looking up mysql::server::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::server::install_secret_file in JSON backend", > "Debug: hiera(): Looking up mysql::server::manage_config_file in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_manage in JSON backend", > "Debug: hiera(): Looking up mysql::server::package_name in JSON backend", > "Debug: hiera(): Looking up mysql::server::purge_conf_dir in JSON backend", > "Debug: hiera(): Looking up mysql::server::restart in JSON backend", > "Debug: hiera(): Looking up mysql::server::root_group in JSON backend", > "Debug: hiera(): Looking up mysql::server::mysql_group in JSON backend", > "Debug: hiera(): Looking up mysql::server::service_name in JSON backend", > "Debug: hiera(): Looking up mysql::server::service_provider in JSON backend", > "Debug: hiera(): Looking up mysql::server::users in JSON backend", > "Debug: hiera(): Looking up mysql::server::grants in JSON backend", > "Debug: hiera(): Looking up mysql::server::databases in JSON backend", > "Debug: hiera(): Looking up mysql::server::enabled in JSON backend", > "Debug: hiera(): Looking up mysql::server::manage_service in JSON backend", > "Debug: hiera(): Looking up mysql::server::old_root_password in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/db.pp' in environment 
production", > "Debug: Automatically imported mysql::db from mysql/db into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/config.pp' in environment production", > "Debug: Automatically imported mysql::server::config from mysql/server/config into production", > "Debug: Scope(Class[Mysql::Server::Config]): Retrieving template mysql/my.cnf.erb", > "Debug: template[/etc/puppet/modules/mysql/templates/my.cnf.erb]: Bound template variables for /etc/puppet/modules/mysql/templates/my.cnf.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/mysql/templates/my.cnf.erb]: Interpolated template /etc/puppet/modules/mysql/templates/my.cnf.erb in 0.00 seconds", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/install.pp' in environment production", > "Debug: Automatically imported mysql::server::install from mysql/server/install into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/binarylog.pp' in environment production", > "Debug: Automatically imported mysql::server::binarylog from mysql/server/binarylog into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/installdb.pp' in environment production", > "Debug: Automatically imported mysql::server::installdb from mysql/server/installdb into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/service.pp' in environment production", > "Debug: Automatically imported mysql::server::service from mysql/server/service into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/root_password.pp' in environment production", > "Debug: Automatically imported mysql::server::root_password from mysql/server/root_password into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/server/providers.pp' in environment production", > "Debug: Automatically imported mysql::server::providers from mysql/server/providers into production", > "Debug: importing 
'/etc/puppet/modules/mysql/manifests/server/account_security.pp' in environment production", > "Debug: Automatically imported mysql::server::account_security from mysql/server/account_security into production", > "Debug: hiera(): Looking up aodh_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/aodh/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/aodh/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported aodh::db::mysql from aodh/db/mysql into production", > "Debug: hiera(): Looking up aodh::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up aodh::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/aodh/manifests/deps.pp' in environment production", > "Debug: Automatically imported aodh::deps from aodh/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/db.pp' in environment production", > "Debug: Automatically imported oslo::db from oslo/db into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/policy/base.pp' in environment production", > "Debug: Automatically imported openstacklib::policy::base from openstacklib/policy/base into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported openstacklib::db::mysql from openstacklib/db/mysql into production", > "Debug: hiera(): Looking up ceilometer_collector_enabled in JSON backend", > "Debug: 
hiera(): Looking up cinder_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported cinder::db::mysql from cinder/db/mysql into production", > "Debug: hiera(): Looking up cinder::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up cinder::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/deps.pp' in environment production", > "Debug: Automatically imported cinder::deps from cinder/deps into production", > "Debug: hiera(): Looking up barbican_api_enabled in JSON backend", > "Debug: hiera(): Looking up congress_enabled in JSON backend", > "Debug: hiera(): Looking up designate_api_enabled in JSON backend", > "Debug: hiera(): Looking up glance_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/glance/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/glance/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported glance::db::mysql from glance/db/mysql into production", > "Debug: hiera(): Looking up glance::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking 
up glance::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up glance::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/glance/manifests/deps.pp' in environment production", > "Debug: Automatically imported glance::deps from glance/deps into production", > "Debug: hiera(): Looking up gnocchi_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported gnocchi::db::mysql from gnocchi/db/mysql into production", > "Debug: hiera(): Looking up gnocchi::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up gnocchi::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/gnocchi/manifests/deps.pp' in environment production", > "Debug: Automatically imported gnocchi::deps from gnocchi/deps into production", > "Debug: hiera(): Looking up heat_engine_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/heat/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/heat/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported heat::db::mysql from heat/db/mysql into production", > "Debug: hiera(): Looking up heat::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::host in JSON backend", > 
"Debug: hiera(): Looking up heat::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up heat::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/heat/manifests/deps.pp' in environment production", > "Debug: Automatically imported heat::deps from heat/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/cache.pp' in environment production", > "Debug: Automatically imported oslo::cache from oslo/cache into production", > "Debug: hiera(): Looking up ironic_api_enabled in JSON backend", > "Debug: hiera(): Looking up ironic_inspector_enabled in JSON backend", > "Debug: hiera(): Looking up keystone_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/keystone/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/keystone/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported keystone::db::mysql from keystone/db/mysql into production", > "Debug: hiera(): Looking up keystone::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up keystone::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/keystone/manifests/deps.pp' in environment production", > "Debug: Automatically imported keystone::deps from keystone/deps into production", > "Debug: hiera(): Looking up manila_api_enabled in JSON backend", > "Debug: hiera(): Looking up mistral_api_enabled in JSON backend", > "Debug: hiera(): Looking up neutron_api_enabled in JSON backend", > "Debug: 
importing '/etc/puppet/modules/neutron/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/neutron/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported neutron::db::mysql from neutron/db/mysql into production", > "Debug: hiera(): Looking up neutron::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up neutron::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/neutron/manifests/deps.pp' in environment production", > "Debug: Automatically imported neutron::deps from neutron/deps into production", > "Debug: hiera(): Looking up nova_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported nova::db::mysql from nova/db/mysql into production", > "Debug: hiera(): Looking up nova::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql::setup_cell0 in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/deps.pp' in environment 
production", > "Debug: Automatically imported nova::deps from nova/deps into production", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql_api.pp' in environment production", > "Debug: Automatically imported nova::db::mysql_api from nova/db/mysql_api into production", > "Debug: hiera(): Looking up nova::db::mysql_api::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_api::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up nova_placement_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/nova/manifests/db/mysql_placement.pp' in environment production", > "Debug: Automatically imported nova::db::mysql_placement from nova/db/mysql_placement into production", > "Debug: hiera(): Looking up nova::db::mysql_placement::password in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::dbname in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::user in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::host in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::charset in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::collate in JSON backend", > "Debug: hiera(): Looking up nova::db::mysql_placement::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up octavia_api_enabled in JSON backend", > "Debug: hiera(): Looking up sahara_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/sahara/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/sahara/manifests/db/mysql.pp' in 
environment production", > "Debug: Automatically imported sahara::db::mysql from sahara/db/mysql into production", > "Debug: hiera(): Looking up sahara::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::allowed_hosts in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up sahara::db::mysql::collate in JSON backend", > "Debug: importing '/etc/puppet/modules/sahara/manifests/deps.pp' in environment production", > "Debug: Automatically imported sahara::deps from sahara/deps into production", > "Debug: hiera(): Looking up tacker_enabled in JSON backend", > "Debug: hiera(): Looking up trove_api_enabled in JSON backend", > "Debug: hiera(): Looking up panko_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/panko/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/panko/manifests/db/mysql.pp' in environment production", > "Debug: Automatically imported panko::db::mysql from panko/db/mysql into production", > "Debug: hiera(): Looking up panko::db::mysql::password in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::dbname in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::user in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::host in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::charset in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::collate in JSON backend", > "Debug: hiera(): Looking up panko::db::mysql::allowed_hosts in JSON backend", > "Debug: importing '/etc/puppet/modules/panko/manifests/deps.pp' in environment production", > "Debug: Automatically imported panko::deps from panko/deps into production", > "Debug: 
hiera(): Looking up ec2_api_enabled in JSON backend", > "Debug: hiera(): Looking up zaqar_api_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/client.pp' in environment production", > "Debug: Automatically imported mysql::client from mysql/client into production", > "Debug: hiera(): Looking up mysql::client::bindings_enable in JSON backend", > "Debug: hiera(): Looking up mysql::client::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_manage in JSON backend", > "Debug: hiera(): Looking up mysql::client::package_name in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/client/install.pp' in environment production", > "Debug: Automatically imported mysql::client::install from mysql/client/install into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp' in environment production", > "Debug: Automatically imported openstacklib::db::mysql::host_access from openstacklib/db/mysql/host_access into production", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[galera] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-galera-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[galera-bundle] with 'before'", > "Debug: Adding relationship from Anchor[mysql::server::start] to Class[Mysql::Server::Install] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Install] to Class[Mysql::Server::Config] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Config] to Class[Mysql::Server::Binarylog] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Binarylog] to Class[Mysql::Server::Installdb] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Installdb] to 
Class[Mysql::Server::Service] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Service] to Class[Mysql::Server::Root_password] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Root_password] to Class[Mysql::Server::Providers] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server::Providers] to Anchor[mysql::server::end] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from Anchor[aodh::install::end] to Anchor[aodh::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[aodh::config::end] to Anchor[aodh::db::begin] with 'before'", 
> "Debug: Adding relationship from Anchor[aodh::db::begin] to Anchor[aodh::db::end] with 'before'", > "Debug: Adding relationship from Anchor[aodh::db::end] to Anchor[aodh::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::dbsync::begin] to Anchor[aodh::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[aodh::dbsync::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::install::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::config::end] to Anchor[aodh::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[aodh::db::begin] to Class[Aodh::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Aodh::Db::Mysql] to Anchor[aodh::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::begin] to Anchor[cinder::db::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::end] to Anchor[cinder::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::dbsync::begin] to Anchor[cinder::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::dbsync::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::db::begin] to Class[Cinder::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Cinder::Db::Mysql] to Anchor[cinder::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[glance::install::end] to 
Anchor[glance::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[glance::config::end] to Anchor[glance::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[glance::db::begin] to Anchor[glance::db::end] with 'before'", > "Debug: Adding relationship from Anchor[glance::db::end] to Anchor[glance::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::dbsync::begin] to Anchor[glance::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[glance::dbsync::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::install::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::config::end] to Anchor[glance::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[glance::db::begin] to Class[Glance::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Glance::Db::Mysql] to Anchor[glance::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::install::end] to Anchor[gnocchi::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::config::end] to Anchor[gnocchi::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::db::begin] to Anchor[gnocchi::db::end] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::db::end] to Anchor[gnocchi::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::dbsync::begin] to Anchor[gnocchi::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[gnocchi::dbsync::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::install::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::config::end] to Anchor[gnocchi::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[gnocchi::db::begin] to 
Class[Gnocchi::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Gnocchi::Db::Mysql] to Anchor[gnocchi::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[heat::install::end] to Anchor[heat::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[heat::config::end] to Anchor[heat::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[heat::db::begin] to Anchor[heat::db::end] with 'before'", > "Debug: Adding relationship from Anchor[heat::db::end] to Anchor[heat::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::dbsync::begin] to Anchor[heat::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[heat::dbsync::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::install::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::config::end] to Anchor[heat::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[heat::db::begin] to Class[Heat::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Heat::Db::Mysql] to Anchor[heat::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::install::end] to Anchor[keystone::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[keystone::config::end] to Anchor[keystone::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[keystone::db::begin] to Anchor[keystone::db::end] with 'before'", > "Debug: Adding relationship from Anchor[keystone::db::end] to Anchor[keystone::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::dbsync::begin] to Anchor[keystone::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[keystone::dbsync::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::install::end] to Anchor[keystone::service::begin] with 'notify'", 
> "Debug: Adding relationship from Anchor[keystone::config::end] to Anchor[keystone::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[keystone::db::begin] to Class[Keystone::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Keystone::Db::Mysql] to Anchor[keystone::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::install::end] to Anchor[neutron::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[neutron::config::end] to Anchor[neutron::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[neutron::db::begin] to Anchor[neutron::db::end] with 'before'", > "Debug: Adding relationship from Anchor[neutron::db::end] to Anchor[neutron::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::dbsync::begin] to Anchor[neutron::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[neutron::dbsync::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::install::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::config::end] to Anchor[neutron::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[neutron::db::begin] to Class[Neutron::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Neutron::Db::Mysql] to Anchor[neutron::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::install::end] to Anchor[nova::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[nova::config::end] to Anchor[nova::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Anchor[nova::db::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::install::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding 
relationship from Anchor[nova::config::end] to Anchor[nova::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[nova::dbsync_api::begin] to Anchor[nova::dbsync_api::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::dbsync::begin] to Anchor[nova::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::cell_v2::begin] to Anchor[nova::cell_v2::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db_online_data_migrations::begin] to Anchor[nova::db_online_data_migrations::end] with 'before'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql_api] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql_api] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[nova::db::begin] to Class[Nova::Db::Mysql_placement] with 'notify'", > "Debug: Adding relationship from Class[Nova::Db::Mysql_placement] to Anchor[nova::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::install::end] to Anchor[sahara::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[sahara::config::end] to Anchor[sahara::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[sahara::db::begin] to Anchor[sahara::db::end] with 'before'", > "Debug: Adding relationship from Anchor[sahara::db::end] to Anchor[sahara::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::dbsync::begin] to Anchor[sahara::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[sahara::dbsync::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::install::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding 
relationship from Anchor[sahara::config::end] to Anchor[sahara::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[sahara::db::begin] to Class[Sahara::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Sahara::Db::Mysql] to Anchor[sahara::db::end] with 'notify'", > "Debug: Adding relationship from Anchor[panko::install::end] to Anchor[panko::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[panko::config::end] to Anchor[panko::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[panko::db::begin] to Anchor[panko::db::end] with 'before'", > "Debug: Adding relationship from Anchor[panko::db::end] to Anchor[panko::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::dbsync::begin] to Anchor[panko::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[panko::dbsync::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::install::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::config::end] to Anchor[panko::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[panko::db::begin] to Class[Panko::Db::Mysql] with 'notify'", > "Debug: Adding relationship from Class[Panko::Db::Mysql] to Anchor[panko::db::end] with 'notify'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from 
File[/root/.my.cnf] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@172.17.1.16] with 'before'", > 
"Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[aodh@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[cinder@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[glance@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[gnocchi@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[heat@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[keystone@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[neutron@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to 
Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_api@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[nova_placement@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[sahara@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_user[panko@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@172.17.1.16/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[aodh@172.17.1.17/aodh.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to 
Mysql_grant[cinder@172.17.1.16/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[cinder@172.17.1.17/cinder.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@172.17.1.16/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[glance@172.17.1.17/glance.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@172.17.1.16/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@172.17.1.16/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[heat@172.17.1.17/heat.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@172.17.1.16/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[keystone@172.17.1.17/keystone.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@172.17.1.16/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding 
relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.16/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.17/nova.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.16/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova@172.17.1.17/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@172.17.1.16/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_api@172.17.1.17/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@172.17.1.16/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@172.17.1.16/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[sahara@172.17.1.17/sahara.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@172.17.1.16/panko.*] with 'before'", > "Debug: Adding relationship from File[/root/.my.cnf] to Mysql_grant[panko@172.17.1.17/panko.*] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_database[test] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship 
from File[/etc/sysconfig/clustercheck] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[aodh@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[cinder@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[glance@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@172.17.1.16] with 'before'", > "Debug: Adding 
relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[gnocchi@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[heat@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[keystone@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[neutron@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_api@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[nova_placement@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[sahara@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@172.17.1.16] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_user[panko@172.17.1.17] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@172.17.1.16/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[aodh@172.17.1.17/aodh.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@172.17.1.16/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[cinder@172.17.1.17/cinder.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[glance@172.17.1.16/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to 
Mysql_grant[glance@172.17.1.17/glance.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@172.17.1.16/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@172.17.1.16/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[heat@172.17.1.17/heat.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@172.17.1.16/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[keystone@172.17.1.17/keystone.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@172.17.1.16/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.16/nova.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.17/nova.*] with 'before'", > "Debug: Adding relationship from 
File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.16/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova@172.17.1.17/nova_cell0.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@172.17.1.16/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_api@172.17.1.17/nova_api.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@172.17.1.16/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@172.17.1.16/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[sahara@172.17.1.17/sahara.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@172.17.1.16/panko.*] with 'before'", > "Debug: Adding relationship from File[/etc/sysconfig/clustercheck] to Mysql_grant[panko@172.17.1.17/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[test] with 
'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[aodh] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[cinder] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[glance] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[gnocchi] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[heat] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[keystone] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[ovs_neutron] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_cell0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_api] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[nova_placement] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[sahara] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_database[panko] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@127.0.0.1] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@::1] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@localhost] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@localhost.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@localhost.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@controller-0.localdomain] with 'before'", > "Debug: 
Adding relationship from Exec[galera-ready] to Mysql_user[@controller-0.localdomain] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[root@controller-0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[@controller-0] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[aodh@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[cinder@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[glance@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[gnocchi@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[heat@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@172.17.1.16] with 
'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[keystone@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[neutron@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_api@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[nova_placement@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[sahara@172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@%] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@172.17.1.16] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_user[panko@172.17.1.17] with 'before'", > "Debug: Adding relationship from 
Exec[galera-ready] to Mysql_grant[aodh@%/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@172.17.1.16/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[aodh@172.17.1.17/aodh.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@%/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@172.17.1.16/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[cinder@172.17.1.17/cinder.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@%/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@172.17.1.16/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[glance@172.17.1.17/glance.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@%/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@172.17.1.16/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@%/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@172.17.1.16/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[heat@172.17.1.17/heat.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@%/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@172.17.1.16/keystone.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[keystone@172.17.1.17/keystone.*] with 'before'", > "Debug: Adding relationship from 
Exec[galera-ready] to Mysql_grant[neutron@%/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@172.17.1.16/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@%/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.16/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.17/nova.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@%/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.16/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova@172.17.1.17/nova_cell0.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@%/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@172.17.1.16/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_api@172.17.1.17/nova_api.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@%/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@172.17.1.16/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@%/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[sahara@172.17.1.16/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to 
Mysql_grant[sahara@172.17.1.17/sahara.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@%/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@172.17.1.16/panko.*] with 'before'", > "Debug: Adding relationship from Exec[galera-ready] to Mysql_grant[panko@172.17.1.17/panko.*] with 'before'", > "Debug: Adding relationship from File[/var/lib/tripleo/pacemaker-restarts] to Exec[galera-master resource restart flag] with 'before'", > "Debug: Adding relationship from Anchor[mysql::client::start] to Class[Mysql::Client::Install] with 'before'", > "Debug: Adding relationship from Class[Mysql::Client::Install] to Anchor[mysql::client::end] with 'before'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[aodh] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[aodh] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[cinder] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[cinder] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[glance] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[glance] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[gnocchi] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[gnocchi] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[heat] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[heat] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[keystone] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[keystone] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to 
Mysql_database[ovs_neutron] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[ovs_neutron] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_cell0] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_cell0] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_api] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_api] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[nova_placement] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[nova_placement] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[sahara] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[sahara] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Server] to Mysql_database[panko] with 'notify'", > "Debug: Adding relationship from Class[Mysql::Client] to Mysql_database[panko] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@%] to Mysql_grant[aodh@%/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@172.17.1.16] to Mysql_grant[aodh@172.17.1.16/aodh.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[aodh] to Mysql_user[aodh@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[aodh@172.17.1.17] to Mysql_grant[aodh@172.17.1.17/aodh.*] with 'notify'", > "Debug: 
Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@%] to Mysql_grant[cinder@%/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@172.17.1.16] to Mysql_grant[cinder@172.17.1.16/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[cinder] to Mysql_user[cinder@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[cinder@172.17.1.17] to Mysql_grant[cinder@172.17.1.17/cinder.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@%] to Mysql_grant[glance@%/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@172.17.1.16] to Mysql_grant[glance@172.17.1.16/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[glance] to Mysql_user[glance@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[glance@172.17.1.17] to Mysql_grant[glance@172.17.1.17/glance.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@%] to Mysql_grant[gnocchi@%/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@172.17.1.16] to Mysql_grant[gnocchi@172.17.1.16/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[gnocchi] to Mysql_user[gnocchi@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[gnocchi@172.17.1.17] to 
Mysql_grant[gnocchi@172.17.1.17/gnocchi.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@%] to Mysql_grant[heat@%/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@172.17.1.16] to Mysql_grant[heat@172.17.1.16/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[heat] to Mysql_user[heat@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[heat@172.17.1.17] to Mysql_grant[heat@172.17.1.17/heat.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@%] to Mysql_grant[keystone@%/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@172.17.1.16] to Mysql_grant[keystone@172.17.1.16/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[keystone] to Mysql_user[keystone@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[keystone@172.17.1.17] to Mysql_grant[keystone@172.17.1.17/keystone.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@%] to Mysql_grant[neutron@%/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[neutron@172.17.1.16] to Mysql_grant[neutron@172.17.1.16/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[ovs_neutron] to Mysql_user[neutron@172.17.1.17] with 'notify'", > "Debug: Adding 
relationship from Mysql_user[neutron@172.17.1.17] to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@%] to Mysql_grant[nova@%/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.16] to Mysql_grant[nova@172.17.1.16/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova] to Mysql_user[nova@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.17] to Mysql_grant[nova@172.17.1.17/nova.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@%] to Mysql_grant[nova@%/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.16] to Mysql_grant[nova@172.17.1.16/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova@172.17.1.17] to Mysql_grant[nova@172.17.1.17/nova_cell0.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@%] to Mysql_grant[nova_api@%/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@172.17.1.16] to Mysql_grant[nova_api@172.17.1.16/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_api] to Mysql_user[nova_api@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_api@172.17.1.17] to Mysql_grant[nova_api@172.17.1.17/nova_api.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@%] to 
Mysql_grant[nova_placement@%/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@172.17.1.16] to Mysql_grant[nova_placement@172.17.1.16/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[nova_placement] to Mysql_user[nova_placement@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[nova_placement@172.17.1.17] to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@%] to Mysql_grant[sahara@%/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@172.17.1.16] to Mysql_grant[sahara@172.17.1.16/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[sahara] to Mysql_user[sahara@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[sahara@172.17.1.17] to Mysql_grant[sahara@172.17.1.17/sahara.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@%] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@%] to Mysql_grant[panko@%/panko.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@172.17.1.16] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@172.17.1.16] to Mysql_grant[panko@172.17.1.16/panko.*] with 'notify'", > "Debug: Adding relationship from Mysql_database[panko] to Mysql_user[panko@172.17.1.17] with 'notify'", > "Debug: Adding relationship from Mysql_user[panko@172.17.1.17] to Mysql_grant[panko@172.17.1.17/panko.*] with 'notify'", > "Debug: File[mysql-config-file]: Adding default for 
owner", > "Debug: File[mysql-config-file]: Adding default for group", > "Debug: File[/etc/my.cnf.d]: Adding default for owner", > "Debug: File[/etc/my.cnf.d]: Adding default for group", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.46 seconds", > "Info: Applying configuration version '1529673573'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[galera]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-galera-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/subscribe: subscribes to File[mysql-config-file]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to 
Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[aodh@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[cinder@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[glance@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: 
subscribes to Mysql_user[gnocchi@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[gnocchi@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[heat@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[keystone@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[neutron@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova@172.17.1.17]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_api@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[nova_placement@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[sahara@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_user[panko@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@172.17.1.16/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[aodh@172.17.1.17/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@172.17.1.16/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[cinder@172.17.1.17/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@172.17.1.16/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[glance@172.17.1.17/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@172.17.1.16/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to 
Mysql_grant[heat@172.17.1.16/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[heat@172.17.1.17/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@172.17.1.16/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[keystone@172.17.1.17/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@172.17.1.16/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.16/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.16/nova_cell0.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@172.17.1.16/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_api@172.17.1.17/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@172.17.1.16/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@172.17.1.16/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[sahara@172.17.1.17/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@172.17.1.16/panko.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/before: subscribes to Mysql_grant[panko@172.17.1.17/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes 
to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@localhost]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[@controller-0]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[aodh@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[cinder@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[glance@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[gnocchi@172.17.1.17]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[heat@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[keystone@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[neutron@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova@172.17.1.17]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_api@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[nova_placement@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[sahara@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_user[panko@172.17.1.17]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@172.17.1.16/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[aodh@172.17.1.17/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@172.17.1.16/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[cinder@172.17.1.17/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@172.17.1.16/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[glance@172.17.1.17/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[gnocchi@172.17.1.16/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to 
Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@172.17.1.16/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[heat@172.17.1.17/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@172.17.1.16/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[keystone@172.17.1.17/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@172.17.1.16/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.16/nova.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.16/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@172.17.1.16/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_api@172.17.1.17/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@172.17.1.16/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@172.17.1.16/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[sahara@172.17.1.17/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@172.17.1.16/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/before: subscribes to Mysql_grant[panko@172.17.1.17/panko.*]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Server/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Server/before: 
subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Server/notify: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Mysql::Server::Config/before: subscribes to Class[Mysql::Server::Binarylog]", > "Debug: /Stage[main]/Mysql::Server::Install/before: subscribes to Class[Mysql::Server::Config]", > "Debug: /Stage[main]/Mysql::Server::Binarylog/before: subscribes to Class[Mysql::Server::Installdb]", > "Debug: /Stage[main]/Mysql::Server::Installdb/before: subscribes to Class[Mysql::Server::Service]", > "Debug: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/require: subscribes to Mysql_datadir[/var/lib/mysql]", > "Debug: /Stage[main]/Mysql::Server::Service/before: subscribes to Class[Mysql::Server::Root_password]", > "Debug: /Stage[main]/Mysql::Server::Root_password/before: subscribes to Class[Mysql::Server::Providers]", > "Debug: /Stage[main]/Mysql::Server::Providers/before: subscribes to 
Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@%]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@localhost.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0.localdomain]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]/require: subscribes to Anchor[mysql::server::end]", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]/before: subscribes to Class[Mysql::Server::Install]", > "Debug: /Stage[main]/Aodh::Db::Mysql/notify: subscribes to Anchor[aodh::db::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]/before: subscribes to Anchor[aodh::config::begin]", > "Debug: 
/Stage[main]/Aodh::Deps/Anchor[aodh::install::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]/before: subscribes to Anchor[aodh::db::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]/before: subscribes to Anchor[aodh::db::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]/notify: subscribes to Class[Aodh::Db::Mysql]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]/notify: subscribes to Anchor[aodh::dbsync::begin]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]/before: subscribes to Anchor[aodh::dbsync::end]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]/notify: subscribes to Anchor[aodh::service::begin]", > "Debug: /Stage[main]/Cinder::Db::Mysql/notify: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/before: subscribes to Anchor[cinder::config::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/before: subscribes to Anchor[cinder::db::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/before: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/notify: subscribes to Class[Cinder::Db::Mysql]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]/notify: subscribes to Anchor[cinder::dbsync::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]/before: subscribes to Anchor[cinder::dbsync::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]/notify: subscribes to Anchor[cinder::service::begin]", 
> "Debug: /Stage[main]/Glance::Db::Mysql/notify: subscribes to Anchor[glance::db::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]/before: subscribes to Anchor[glance::config::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]/before: subscribes to Anchor[glance::db::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]/before: subscribes to Anchor[glance::db::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]/notify: subscribes to Class[Glance::Db::Mysql]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]/notify: subscribes to Anchor[glance::dbsync::begin]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]/before: subscribes to Anchor[glance::dbsync::end]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]/notify: subscribes to Anchor[glance::service::begin]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/notify: subscribes to Anchor[gnocchi::db::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]/before: subscribes to Anchor[gnocchi::config::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]/before: subscribes to Anchor[gnocchi::db::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]/before: subscribes to Anchor[gnocchi::db::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]/notify: subscribes to Class[Gnocchi::Db::Mysql]", > "Debug: 
/Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]/notify: subscribes to Anchor[gnocchi::dbsync::begin]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]/before: subscribes to Anchor[gnocchi::dbsync::end]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]/notify: subscribes to Anchor[gnocchi::service::begin]", > "Debug: /Stage[main]/Heat::Db::Mysql/notify: subscribes to Anchor[heat::db::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]/before: subscribes to Anchor[heat::config::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]/before: subscribes to Anchor[heat::db::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]/before: subscribes to Anchor[heat::db::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]/notify: subscribes to Class[Heat::Db::Mysql]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]/notify: subscribes to Anchor[heat::dbsync::begin]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]/before: subscribes to Anchor[heat::dbsync::end]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]/notify: subscribes to Anchor[heat::service::begin]", > "Debug: /Stage[main]/Keystone::Db::Mysql/notify: subscribes to Anchor[keystone::db::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]/before: subscribes to Anchor[keystone::config::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]/before: subscribes to Anchor[keystone::db::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]/notify: subscribes to 
Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]/before: subscribes to Anchor[keystone::db::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]/notify: subscribes to Class[Keystone::Db::Mysql]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]/notify: subscribes to Anchor[keystone::dbsync::begin]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]/before: subscribes to Anchor[keystone::dbsync::end]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]/notify: subscribes to Anchor[keystone::service::begin]", > "Debug: /Stage[main]/Neutron::Db::Mysql/notify: subscribes to Anchor[neutron::db::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]/before: subscribes to Anchor[neutron::config::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]/before: subscribes to Anchor[neutron::db::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]/before: subscribes to Anchor[neutron::db::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]/notify: subscribes to Class[Neutron::Db::Mysql]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]/notify: subscribes to Anchor[neutron::dbsync::begin]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]/before: subscribes to Anchor[neutron::dbsync::end]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]/notify: subscribes to Anchor[neutron::service::begin]", > "Debug: /Stage[main]/Nova::Db::Mysql/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]/before: subscribes to Anchor[nova::config::begin]", > 
"Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]/before: subscribes to Anchor[nova::db::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/before: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql_api]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]/notify: subscribes to Class[Nova::Db::Mysql_placement]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]/subscribe: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]/before: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/subscribe: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]/before: subscribes to Anchor[nova::dbsync::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]/notify: subscribes to Anchor[nova::cell_v2::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]/notify: subscribes to 
Anchor[nova::dbsync::begin]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]/subscribe: subscribes to Anchor[nova::dbsync_api::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]/before: subscribes to Anchor[nova::db_online_data_migrations::end]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]/notify: subscribes to Anchor[nova::service::begin]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/notify: subscribes to Anchor[nova::db::end]", > "Debug: /Stage[main]/Sahara::Db::Mysql/notify: subscribes to Anchor[sahara::db::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]/before: subscribes to Anchor[sahara::config::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]/before: subscribes to Anchor[sahara::db::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]/before: subscribes to Anchor[sahara::db::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]/notify: subscribes to Class[Sahara::Db::Mysql]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]/notify: subscribes to Anchor[sahara::dbsync::begin]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]/before: subscribes to Anchor[sahara::dbsync::end]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]/notify: subscribes to Anchor[sahara::service::begin]", > "Debug: /Stage[main]/Panko::Db::Mysql/notify: subscribes to Anchor[panko::db::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]/before: subscribes to Anchor[panko::config::begin]", > "Debug: 
/Stage[main]/Panko::Deps/Anchor[panko::install::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]/before: subscribes to Anchor[panko::db::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]/before: subscribes to Anchor[panko::db::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]/notify: subscribes to Class[Panko::Db::Mysql]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]/notify: subscribes to Anchor[panko::dbsync::begin]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]/before: subscribes to Anchor[panko::dbsync::end]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]/notify: subscribes to Anchor[panko::service::begin]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/require: subscribes to Class[Mysql::Server]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/require: subscribes to Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/before: subscribes to Exec[galera-ready]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[test]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[cinder]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_database[panko]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@127.0.0.1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@::1]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@localhost]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@localhost.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@controller-0.localdomain]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[root@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[@controller-0]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[aodh@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[cinder@172.17.1.17]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[glance@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[gnocchi@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[heat@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[keystone@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to 
Mysql_user[neutron@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[neutron@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_api@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[nova_placement@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[sahara@172.17.1.17]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@172.17.1.16]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_user[panko@172.17.1.17]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@172.17.1.16/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[aodh@172.17.1.17/aodh.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@172.17.1.16/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[cinder@172.17.1.17/cinder.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@172.17.1.16/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[glance@172.17.1.17/glance.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@172.17.1.16/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@172.17.1.16/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[heat@172.17.1.17/heat.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@172.17.1.16/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[keystone@172.17.1.17/keystone.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@172.17.1.16/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to 
Mysql_grant[nova@172.17.1.16/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.16/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova@172.17.1.17/nova_cell0.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@172.17.1.16/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_api@172.17.1.17/nova_api.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@172.17.1.16/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@172.17.1.16/sahara.*]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[sahara@172.17.1.17/sahara.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@172.17.1.16/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/before: subscribes to Mysql_grant[panko@172.17.1.17/panko.*]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo/pacemaker-restarts]/before: subscribes to Exec[galera-master resource restart flag]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[aodh]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[cinder]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[glance]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[gnocchi]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[heat]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[keystone]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[ovs_neutron]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_cell0]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_api]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[nova_placement]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[sahara]", > "Debug: /Stage[main]/Mysql::Client/notify: subscribes to Mysql_database[panko]", > "Debug: 
/Stage[main]/Mysql::Client::Install/before: subscribes to Anchor[mysql::client::end]", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]/before: subscribes to Class[Mysql::Client::Install]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@%]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@172.17.1.16]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/notify: subscribes to Mysql_user[aodh@172.17.1.17]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@%]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@172.17.1.16]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/notify: subscribes to Mysql_user[cinder@172.17.1.17]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@%]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@172.17.1.16]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/notify: subscribes to Mysql_user[glance@172.17.1.17]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@%]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@172.17.1.16]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/notify: subscribes to Mysql_user[gnocchi@172.17.1.17]", > "Debug: 
/Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@%]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@172.17.1.16]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/notify: subscribes to Mysql_user[heat@172.17.1.17]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@%]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@172.17.1.16]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/notify: subscribes to Mysql_user[keystone@172.17.1.17]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@%]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@172.17.1.16]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/notify: subscribes to Mysql_user[neutron@172.17.1.17]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@%]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@172.17.1.16]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/notify: subscribes to Mysql_user[nova@172.17.1.17]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@%]", > "Debug: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@172.17.1.16]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/notify: subscribes to Mysql_user[nova_api@172.17.1.17]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@%]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@172.17.1.16]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/notify: subscribes to Mysql_user[nova_placement@172.17.1.17]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@%]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@172.17.1.16]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/notify: subscribes to Mysql_user[sahara@172.17.1.17]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@%]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@172.17.1.16]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/notify: subscribes to Mysql_user[panko@172.17.1.17]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]/notify: subscribes to Mysql_grant[aodh@%/aodh.*]", > "Debug: 
/Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16]/Mysql_user[aodh@172.17.1.16]/notify: subscribes to Mysql_grant[aodh@172.17.1.16/aodh.*]", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_user[aodh@172.17.1.17]/notify: subscribes to Mysql_grant[aodh@172.17.1.17/aodh.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]/notify: subscribes to Mysql_grant[cinder@%/cinder.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16]/Mysql_user[cinder@172.17.1.16]/notify: subscribes to Mysql_grant[cinder@172.17.1.16/cinder.*]", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_user[cinder@172.17.1.17]/notify: subscribes to Mysql_grant[cinder@172.17.1.17/cinder.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/notify: subscribes to Mysql_grant[glance@%/glance.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16]/Mysql_user[glance@172.17.1.16]/notify: subscribes to Mysql_grant[glance@172.17.1.16/glance.*]", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_user[glance@172.17.1.17]/notify: subscribes to Mysql_grant[glance@172.17.1.17/glance.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]/notify: subscribes to Mysql_grant[gnocchi@%/gnocchi.*]", > "Debug: 
/Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16]/Mysql_user[gnocchi@172.17.1.16]/notify: subscribes to Mysql_grant[gnocchi@172.17.1.16/gnocchi.*]", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_user[gnocchi@172.17.1.17]/notify: subscribes to Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/notify: subscribes to Mysql_grant[heat@%/heat.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16]/Mysql_user[heat@172.17.1.16]/notify: subscribes to Mysql_grant[heat@172.17.1.16/heat.*]", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_user[heat@172.17.1.17]/notify: subscribes to Mysql_grant[heat@172.17.1.17/heat.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/notify: subscribes to Mysql_grant[keystone@%/keystone.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16]/Mysql_user[keystone@172.17.1.16]/notify: subscribes to Mysql_grant[keystone@172.17.1.16/keystone.*]", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_user[keystone@172.17.1.17]/notify: subscribes to Mysql_grant[keystone@172.17.1.17/keystone.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]/notify: subscribes to Mysql_grant[neutron@%/ovs_neutron.*]", > "Debug: 
/Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16]/Mysql_user[neutron@172.17.1.16]/notify: subscribes to Mysql_grant[neutron@172.17.1.16/ovs_neutron.*]", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_user[neutron@172.17.1.17]/notify: subscribes to Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/notify: subscribes to Mysql_grant[nova@%/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/notify: subscribes to Mysql_grant[nova@%/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]/Mysql_user[nova@172.17.1.16]/notify: subscribes to Mysql_grant[nova@172.17.1.16/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]/Mysql_user[nova@172.17.1.16]/notify: subscribes to Mysql_grant[nova@172.17.1.16/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_user[nova@172.17.1.17]/notify: subscribes to Mysql_grant[nova@172.17.1.17/nova.*]", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_user[nova@172.17.1.17]/notify: subscribes to Mysql_grant[nova@172.17.1.17/nova_cell0.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/notify: subscribes to Mysql_grant[nova_api@%/nova_api.*]", > "Debug: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16]/Mysql_user[nova_api@172.17.1.16]/notify: subscribes to Mysql_grant[nova_api@172.17.1.16/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_user[nova_api@172.17.1.17]/notify: subscribes to Mysql_grant[nova_api@172.17.1.17/nova_api.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]/notify: subscribes to Mysql_grant[nova_placement@%/nova_placement.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16]/Mysql_user[nova_placement@172.17.1.16]/notify: subscribes to Mysql_grant[nova_placement@172.17.1.16/nova_placement.*]", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_user[nova_placement@172.17.1.17]/notify: subscribes to Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]/notify: subscribes to Mysql_grant[sahara@%/sahara.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16]/Mysql_user[sahara@172.17.1.16]/notify: subscribes to Mysql_grant[sahara@172.17.1.16/sahara.*]", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_user[sahara@172.17.1.17]/notify: subscribes to Mysql_grant[sahara@172.17.1.17/sahara.*]", > "Debug: 
/Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]/notify: subscribes to Mysql_grant[panko@%/panko.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16]/Mysql_user[panko@172.17.1.16]/notify: subscribes to Mysql_grant[panko@172.17.1.16/panko.*]", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_user[panko@172.17.1.17]/notify: subscribes to Mysql_grant[panko@172.17.1.17/panko.*]", > "Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Adding autorequire relationship with File[/etc/my.cnf.d]", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Adding autorequire relationship with Package[mysql-server]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo/pacemaker-restarts]: Adding autorequire relationship with File[/var/lib/tripleo]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged 
with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}e51811cf726fa3e6a5a924a379dc5198'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}5a169246460baf3e552027b0f5e8a1f8'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Debug: Class[Tripleo::Profile::Base::Database::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Tripleo::Profile::Base::Database::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::start]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Install/Package[mysql-server]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Install/Package[mysql-server]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Config]: Resource is being skipped, unscheduling all events", > "Info: Computing checksum on file /etc/my.cnf.d/galera.cnf", > "Info: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Filebucketed /etc/my.cnf.d/galera.cnf to puppet with sum 
af90358207ccfecae7af249d5ef7dd3e", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}da920df6baf6c7424ed796c11086927e'", > "Info: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: Scheduling refresh of Tripleo::Pacemaker::Resource_restart_flag[galera-master]", > "Debug: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]: The container Class[Mysql::Server::Config] will propagate my refresh event", > "Debug: Tripleo::Pacemaker::Resource_restart_flag[galera-master]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Tripleo::Pacemaker::Resource_restart_flag[galera-master]: Resource is being skipped, unscheduling all events", > "Info: Tripleo::Pacemaker::Resource_restart_flag[galera-master]: Unscheduling all events on Tripleo::Pacemaker::Resource_restart_flag[galera-master]", > "Info: Class[Mysql::Server::Config]: Unscheduling all events on Class[Mysql::Server::Config]", > "Debug: Class[Mysql::Server::Binarylog]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Binarylog]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Installdb]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Installdb]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Installdb/Mysql_datadir[/var/lib/mysql]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Debug: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]: The container Class[Mysql::Server::Installdb] will propagate my refresh event", > "Info: Class[Mysql::Server::Installdb]: Unscheduling all events on Class[Mysql::Server::Installdb]", > "Debug: Class[Mysql::Server::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Root_password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Root_password]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server::Root_password/Exec[remove install pass]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server::Root_password/Exec[remove install pass]: 
Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Providers]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Providers]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Server/Anchor[mysql::server::end]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Server::Account_security]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Server::Account_security]: Resource is being skipped, unscheduling all events", > "Debug: Class[Aodh::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Aodh::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::begin]: 
Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Aodh::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, 
mysql_grant, mysql_user", > "Debug: Class[Aodh::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[aodh]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Cinder::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, 
mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Cinder::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Class[Glance::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Glance::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Glance::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Glance::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[glance]: Not tagged 
with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[glance]: Resource is being skipped, unscheduling all events", > "Debug: Class[Gnocchi::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Gnocchi::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::begin]: Resource is being skipped, unscheduling all 
events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Gnocchi::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Gnocchi::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[gnocchi]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
Openstacklib::Db::Mysql[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Class[Heat::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Heat::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, 
mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Heat::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Heat::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[heat]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[heat]: Resource is being skipped, unscheduling all events", > "Debug: Class[Keystone::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, 
mysql_user", > "Debug: Class[Keystone::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Keystone::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Keystone::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[keystone]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Class[Neutron::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Neutron::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::begin]: Not tagged with file, file_line, concat, 
augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Neutron::Deps/Anchor[neutron::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Neutron::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Neutron::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[neutron]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Nova::Deps/Anchor[nova::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_cell0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_cell0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql_api]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Class[Nova::Db::Mysql_placement]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Nova::Db::Mysql_placement]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[nova_placement]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Class[Sahara::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Sahara::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Sahara::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Sahara::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[sahara]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Class[Panko::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Panko::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::begin]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Panko::Db::Mysql]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Panko::Db::Mysql]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql[panko]: 
Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql[panko]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[galera-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Property[galera-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-p6tzgd returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-p6tzgd property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo]: The container Tripleo::Pacemaker::Resource_restart_flag[galera-master] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo/pacemaker-restarts]/ensure: created", > 
"Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/File[/var/lib/tripleo/pacemaker-restarts]: The container Tripleo::Pacemaker::Resource_restart_flag[galera-master] will propagate my refresh event", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/Exec[galera-master resource restart flag]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Tripleo::Pacemaker::Resource_restart_flag[galera-master]/Exec[galera-master resource restart flag]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Client]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Class[Mysql::Client]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::start]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Client::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, 
mysql_user", > "Debug: Class[Mysql::Client::Install]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client::Install/Package[mysql_client]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client::Install/Package[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Mysql::Client/Anchor[mysql::client::end]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_%]: Resource is being skipped, unscheduling all events", > "Debug: 
Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, 
mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]: Resource is being skipped, unscheduling 
all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_%]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_%]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16]: Resource is being skipped, unscheduling all events", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-9-yr9yu7 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-yr9yu7 property show | grep galera-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep galera-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1oxnndq returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1oxnndq property set --node controller-0 galera-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1oxnndq diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1oxnndq.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 galera-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/Pcmk_property[property-controller-0-galera-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Property[galera-role-controller-0]/Pcmk_property[property-controller-0-galera-role]: The container Pacemaker::Property[galera-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[galera-role-controller-0]: Unscheduling all events on Pacemaker::Property[galera-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[galera-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Resource::Bundle[galera-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-9-171lhcs returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-171lhcs constraint list | grep location-galera-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1cnwqn1 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1cnwqn1 resource show galera-bundle > /dev/null 2>&1", > "Debug: Exists: bundle galera-bundle exists 1 location exists 1 deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1i0bjof returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1i0bjof resource bundle create galera-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest replicas=1 masters=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=mysql-cfg-files source-dir=/var/lib/kolla/config_files/mysql.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=mysql-cfg-data source-dir=/var/lib/config-data/puppet-generated/mysql/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=mysql-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=mysql-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=mysql-lib source-dir=/var/lib/mysql target-dir=/var/lib/mysql options=rw storage-map id=mysql-log-mariadb source-dir=/var/log/mariadb target-dir=/var/log/mariadb options=rw storage-map id=mysql-log source-dir=/var/log/containers/mysql target-dir=/var/log/mysql options=rw storage-map id=mysql-dev-log source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3123 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1i0bjof diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1i0bjof.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: location_rule_create: constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-b1niie returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-b1niie constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-b1niie diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-9-b1niie.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-s34pf1 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-s34pf1 resource enable galera-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-s34pf1 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-9-s34pf1.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Bundle[galera-bundle]/Pcmk_bundle[galera-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Bundle[galera-bundle]/Pcmk_bundle[galera-bundle]: The container Pacemaker::Resource::Bundle[galera-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[galera-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[galera-bundle]", > "Debug: Pacemaker::Resource::Ocf[galera]: Not tagged with file, file_line, 
concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Pacemaker::Resource::Ocf[galera]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-i734mf returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-i734mf constraint list | grep location-galera-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1hks2j0 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1hks2j0 resource show galera > /dev/null 2>&1", > "Debug: Exists: resource galera exists 1 location exists 0 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1garsi1 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1garsi1 resource create galera ocf:heartbeat:galera log='/var/log/mysql/mysqld.log' additional_parameters='--open-files-limit=16384' enable_creation=true wsrep_cluster_address='gcomm://controller-0.internalapi.localdomain' cluster_host_map='controller-0:controller-0.internalapi.localdomain' meta master-max=1 ordered=true container-attribute-target=host op promote timeout=300s on-fail=block bundle galera-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1garsi1 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-9-1garsi1.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/Pcmk_resource[galera]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Pacemaker::Resource::Ocf[galera]/Pcmk_resource[galera]: The container Pacemaker::Resource::Ocf[galera] will propagate my refresh event", > "Info: Pacemaker::Resource::Ocf[galera]: Unscheduling all events on Pacemaker::Resource::Ocf[galera]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 1/180", > "Debug: Exec[galera-ready](provider=posix): Executing '/usr/bin/clustercheck >/dev/null'", > "Debug: Executing: '/usr/bin/clustercheck >/dev/null'", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Sleeping for 10 seconds between tries", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 2/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 3/180", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: Exec try 4/180", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]/returns: executed successfully", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/Exec[galera-ready]: The container Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle] will propagate my refresh event", > "Info: Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]: Unscheduling all events on Class[Tripleo::Profile::Pacemaker::Database::Mysql_bundle]", > "Debug: Prefetching mysql resources for mysql_user", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT CONCAT(User, '@',Host) AS User FROM mysql.user'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM 
mysql.user WHERE CONCAT(user, '@', host) = 'root@%''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@127.0.0.1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@::1''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@controller-0''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'clustercheck@localhost''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT MAX_USER_CONNECTIONS, MAX_CONNECTIONS, MAX_QUESTIONS, MAX_UPDATES, SSL_TYPE, SSL_CIPHER, X509_ISSUER, X509_SUBJECT, PASSWORD /*!50508 , PLUGIN */ FROM mysql.user WHERE CONCAT(user, '@', host) = 'root@localhost''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'127.0.0.1''", > "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]/ensure: removed", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@127.0.0.1]: The container Class[Mysql::Server::Account_security] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql 
--defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'::1''", > "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]/ensure: removed", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@::1]: The container Class[Mysql::Server::Account_security] will propagate my refresh event", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@%]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@localhost.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0.localdomain]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e DROP USER IF EXISTS 'root'@'controller-0''", > "Notice: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]/ensure: removed", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[root@controller-0]: The container Class[Mysql::Server::Account_security] will propagate my refresh event", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_user[@controller-0]: Nothing to manage: no ensure and the resource doesn't exist", > "Debug: Prefetching mysql resources for mysql_database", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show databases'", > "Debug: Executing: 
'/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' information_schema'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' mysql'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe show variables like '%_database' performance_schema'", > "Debug: /Stage[main]/Mysql::Server::Account_security/Mysql_database[test]: Nothing to manage: no ensure and the resource doesn't exist", > "Info: Class[Mysql::Server::Account_security]: Unscheduling all events on Class[Mysql::Server::Account_security]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `aodh` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Mysql_database[aodh]: The container Openstacklib::Db::Mysql[aodh] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `cinder` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Mysql_database[cinder]: The container Openstacklib::Db::Mysql[cinder] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `glance` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Mysql_database[glance]: The container Openstacklib::Db::Mysql[glance] will propagate my refresh event", > "Debug: 
Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `gnocchi` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Mysql_database[gnocchi]: The container Openstacklib::Db::Mysql[gnocchi] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `heat` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Mysql_database[heat]: The container Openstacklib::Db::Mysql[heat] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `keystone` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Mysql_database[keystone]: The container Openstacklib::Db::Mysql[keystone] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `ovs_neutron` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Mysql_database[ovs_neutron]: The container Openstacklib::Db::Mysql[neutron] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova` character set `utf8` collate 
`utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Mysql_database[nova]: The container Openstacklib::Db::Mysql[nova] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_cell0` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Mysql_database[nova_cell0]: The container Openstacklib::Db::Mysql[nova_cell0] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_api` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Mysql_database[nova_api]: The container Openstacklib::Db::Mysql[nova_api] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `nova_placement` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Mysql_database[nova_placement]: The container Openstacklib::Db::Mysql[nova_placement] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `sahara` character set `utf8` collate `utf8_general_ci`'", > "Notice: 
/Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Mysql_database[sahara]: The container Openstacklib::Db::Mysql[sahara] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe create database if not exists `panko` character set `utf8` collate `utf8_general_ci`'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Mysql_database[panko]: The container Openstacklib::Db::Mysql[panko] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'%' IDENTIFIED BY PASSWORD '*A395604AB048A31DBB820CC9AD661EE971AD81D6''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_user[aodh@%]: The container Openstacklib::Db::Mysql::Host_access[aodh_%] will propagate my refresh event", > "Debug: Prefetching mysql resources for mysql_grant", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'aodh'@'%';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'root'@'%';'", > "Debug: Executing: '/usr/bin/mysql 
--defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'clustercheck'@'localhost';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SHOW GRANTS FOR 'root'@'localhost';'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'%''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]/ensure: created", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe FLUSH PRIVILEGES'", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_%]/Mysql_grant[aodh@%/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'172.17.1.16' IDENTIFIED BY PASSWORD '*A395604AB048A31DBB820CC9AD661EE971AD81D6''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16]/Mysql_user[aodh@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16]/Mysql_user[aodh@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16] will propagate my refresh event", > "Debug: 
Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'172.17.1.16''", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16]/Mysql_grant[aodh@172.17.1.16/aodh.*]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16]/Mysql_grant[aodh@172.17.1.16/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'aodh'@'172.17.1.17' IDENTIFIED BY PASSWORD '*A395604AB048A31DBB820CC9AD661EE971AD81D6''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'aodh'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_user[aodh@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_user[aodh@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `aodh`.* TO 'aodh'@'172.17.1.17''", > "Notice: 
/Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_grant[aodh@172.17.1.17/aodh.*]/ensure: created", > "Debug: /Stage[main]/Aodh::Db::Mysql/Openstacklib::Db::Mysql[aodh]/Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]/Mysql_grant[aodh@172.17.1.17/aodh.*]: The container Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[aodh_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[aodh]: Unscheduling all events on Openstacklib::Db::Mysql[aodh]", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Aodh::Deps/Anchor[aodh::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Aodh::Deps/Anchor[aodh::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'%' IDENTIFIED BY PASSWORD '*C7DDB409A9EA01E431C00367101D63B6AD27E2B2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_user[cinder@%]: The container Openstacklib::Db::Mysql::Host_access[cinder_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'%''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_grant[cinder@%/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_%]/Mysql_grant[cinder@%/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_%]: 
Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'cinder'@'172.17.1.16' IDENTIFIED BY PASSWORD '*C7DDB409A9EA01E431C00367101D63B6AD27E2B2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16]/Mysql_user[cinder@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16]/Mysql_user[cinder@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'172.17.1.16''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16]/Mysql_grant[cinder@172.17.1.16/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16]/Mysql_grant[cinder@172.17.1.16/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 
'cinder'@'172.17.1.17' IDENTIFIED BY PASSWORD '*C7DDB409A9EA01E431C00367101D63B6AD27E2B2''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'cinder'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_user[cinder@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_user[cinder@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `cinder`.* TO 'cinder'@'172.17.1.17''", > "Notice: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_grant[cinder@172.17.1.17/cinder.*]/ensure: created", > "Debug: /Stage[main]/Cinder::Db::Mysql/Openstacklib::Db::Mysql[cinder]/Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]/Mysql_grant[cinder@172.17.1.17/cinder.*]: The container Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[cinder_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[cinder]: Unscheduling all events on Openstacklib::Db::Mysql[cinder]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'%' IDENTIFIED BY PASSWORD '*5E8E2357DF160351CDDC370B85407DB475AE5C5E''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: 
'/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_user[glance@%]: The container Openstacklib::Db::Mysql::Host_access[glance_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'%''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_grant[glance@%/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_%]/Mysql_grant[glance@%/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'172.17.1.16' IDENTIFIED BY PASSWORD '*5E8E2357DF160351CDDC370B85407DB475AE5C5E''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16]/Mysql_user[glance@172.17.1.16]/ensure: created", > 
"Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16]/Mysql_user[glance@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'172.17.1.16''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16]/Mysql_grant[glance@172.17.1.16/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16]/Mysql_grant[glance@172.17.1.16/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'glance'@'172.17.1.17' IDENTIFIED BY PASSWORD '*5E8E2357DF160351CDDC370B85407DB475AE5C5E''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'glance'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_user[glance@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_user[glance@172.17.1.17]: The container 
Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `glance`.* TO 'glance'@'172.17.1.17''", > "Notice: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_grant[glance@172.17.1.17/glance.*]/ensure: created", > "Debug: /Stage[main]/Glance::Db::Mysql/Openstacklib::Db::Mysql[glance]/Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]/Mysql_grant[glance@172.17.1.17/glance.*]: The container Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[glance_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[glance]: Unscheduling all events on Openstacklib::Db::Mysql[glance]", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'%' IDENTIFIED BY PASSWORD '*1D7937B58A62AF923A804A73C66EEB61BF414C0F''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_user[gnocchi@%]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'%''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_grant[gnocchi@%/gnocchi.*]/ensure: created", > 
"Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_%]/Mysql_grant[gnocchi@%/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'172.17.1.16' IDENTIFIED BY PASSWORD '*1D7937B58A62AF923A804A73C66EEB61BF414C0F''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16]/Mysql_user[gnocchi@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16]/Mysql_user[gnocchi@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'172.17.1.16''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16]/Mysql_grant[gnocchi@172.17.1.16/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16]/Mysql_grant[gnocchi@172.17.1.16/gnocchi.*]: The container 
Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'gnocchi'@'172.17.1.17' IDENTIFIED BY PASSWORD '*1D7937B58A62AF923A804A73C66EEB61BF414C0F''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'gnocchi'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_user[gnocchi@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_user[gnocchi@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `gnocchi`.* TO 'gnocchi'@'172.17.1.17''", > "Notice: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]/ensure: created", > "Debug: /Stage[main]/Gnocchi::Db::Mysql/Openstacklib::Db::Mysql[gnocchi]/Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]/Mysql_grant[gnocchi@172.17.1.17/gnocchi.*]: The container Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17] will propagate my refresh event", > "Info: 
Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[gnocchi_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[gnocchi]: Unscheduling all events on Openstacklib::Db::Mysql[gnocchi]", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: 
Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'%' IDENTIFIED BY PASSWORD '*42C85FE58C79A7971AA0E0553D51CEDE5A6A219B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_user[heat@%]: The container Openstacklib::Db::Mysql::Host_access[heat_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'%''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_%]/Mysql_grant[heat@%/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'172.17.1.16' IDENTIFIED BY PASSWORD '*42C85FE58C79A7971AA0E0553D51CEDE5A6A219B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 
MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16]/Mysql_user[heat@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16]/Mysql_user[heat@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'172.17.1.16''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16]/Mysql_grant[heat@172.17.1.16/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16]/Mysql_grant[heat@172.17.1.16/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'heat'@'172.17.1.17' IDENTIFIED BY PASSWORD '*42C85FE58C79A7971AA0E0553D51CEDE5A6A219B''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'heat'@'172.17.1.17' REQUIRE NONE'", > "Notice: 
/Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_user[heat@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_user[heat@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `heat`.* TO 'heat'@'172.17.1.17''", > "Notice: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_grant[heat@172.17.1.17/heat.*]/ensure: created", > "Debug: /Stage[main]/Heat::Db::Mysql/Openstacklib::Db::Mysql[heat]/Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]/Mysql_grant[heat@172.17.1.17/heat.*]: The container Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[heat_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[heat]: Unscheduling all events on Openstacklib::Db::Mysql[heat]", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: 
/Stage[main]/Heat::Deps/Anchor[heat::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'%' IDENTIFIED BY PASSWORD '*3CE966B7D46BE809909021981A3DB60EBB2B9672''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone@%]: The container Openstacklib::Db::Mysql::Host_access[keystone_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql 
--defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'%''", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_grant[keystone@%/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_grant[keystone@%/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'172.17.1.16' IDENTIFIED BY PASSWORD '*3CE966B7D46BE809909021981A3DB60EBB2B9672''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16]/Mysql_user[keystone@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16]/Mysql_user[keystone@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'172.17.1.16''", > "Notice: 
/Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16]/Mysql_grant[keystone@172.17.1.16/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16]/Mysql_grant[keystone@172.17.1.16/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'keystone'@'172.17.1.17' IDENTIFIED BY PASSWORD '*3CE966B7D46BE809909021981A3DB60EBB2B9672''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'keystone'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_user[keystone@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_user[keystone@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `keystone`.* TO 'keystone'@'172.17.1.17''", > "Notice: 
/Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_grant[keystone@172.17.1.17/keystone.*]/ensure: created", > "Debug: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]/Mysql_grant[keystone@172.17.1.17/keystone.*]: The container Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[keystone_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[keystone]: Unscheduling all events on Openstacklib::Db::Mysql[keystone]", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: 
Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'%' IDENTIFIED BY PASSWORD '*927F9F7B2B0AE310D9A7287E3C64004FE5F2BC1A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_user[neutron@%]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'%''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_grant[neutron@%/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]/Mysql_grant[neutron@%/ovs_neutron.*]: The container 
Openstacklib::Db::Mysql::Host_access[ovs_neutron_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'172.17.1.16' IDENTIFIED BY PASSWORD '*927F9F7B2B0AE310D9A7287E3C64004FE5F2BC1A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16]/Mysql_user[neutron@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16]/Mysql_user[neutron@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'172.17.1.16''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16]/Mysql_grant[neutron@172.17.1.16/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16]/Mysql_grant[neutron@172.17.1.16/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16] will propagate my refresh event", > "Info: 
Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'neutron'@'172.17.1.17' IDENTIFIED BY PASSWORD '*927F9F7B2B0AE310D9A7287E3C64004FE5F2BC1A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'neutron'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_user[neutron@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_user[neutron@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `ovs_neutron`.* TO 'neutron'@'172.17.1.17''", > "Notice: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]/ensure: created", > "Debug: /Stage[main]/Neutron::Db::Mysql/Openstacklib::Db::Mysql[neutron]/Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]/Mysql_grant[neutron@172.17.1.17/ovs_neutron.*]: The container Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]: Unscheduling all events on 
Openstacklib::Db::Mysql::Host_access[ovs_neutron_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[neutron]: Unscheduling all events on Openstacklib::Db::Mysql[neutron]", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Neutron::Deps/Anchor[neutron::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e 
CREATE USER 'nova'@'%' IDENTIFIED BY PASSWORD '*F4DB0C7B50CB75643D8E9A7984686F9F61EB697C''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_user[nova@%]: The container Openstacklib::Db::Mysql::Host_access[nova_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_grant[nova@%/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_%]/Mysql_grant[nova@%/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'172.17.1.16' IDENTIFIED BY PASSWORD '*F4DB0C7B50CB75643D8E9A7984686F9F61EB697C''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql 
--defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]/Mysql_user[nova@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]/Mysql_user[nova@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'172.17.1.16''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]/Mysql_grant[nova@172.17.1.16/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]/Mysql_grant[nova@172.17.1.16/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova'@'172.17.1.17' IDENTIFIED BY PASSWORD '*F4DB0C7B50CB75643D8E9A7984686F9F61EB697C''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova'@'172.17.1.17' REQUIRE NONE'", > "Notice: 
/Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_user[nova@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_user[nova@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'172.17.1.17''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_grant[nova@172.17.1.17/nova.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova]/Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]/Mysql_grant[nova@172.17.1.17/nova.*]: The container Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[nova]: Unscheduling all events on Openstacklib::Db::Mysql[nova]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_%]/Mysql_grant[nova@%/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_%]", > 
"Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'172.17.1.16''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.16]/Mysql_grant[nova@172.17.1.16/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.16]/Mysql_grant[nova@172.17.1.16/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'172.17.1.17''", > "Notice: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]/Mysql_grant[nova@172.17.1.17/nova_cell0.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql/Openstacklib::Db::Mysql[nova_cell0]/Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]/Mysql_grant[nova@172.17.1.17/nova_cell0.*]: The container Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_cell0_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[nova_cell0]: Unscheduling all events on Openstacklib::Db::Mysql[nova_cell0]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'%' IDENTIFIED BY PASSWORD '*F4DB0C7B50CB75643D8E9A7984686F9F61EB697C''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf 
--database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_user[nova_api@%]: The container Openstacklib::Db::Mysql::Host_access[nova_api_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_grant[nova_api@%/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_%]/Mysql_grant[nova_api@%/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'172.17.1.16' IDENTIFIED BY PASSWORD '*F4DB0C7B50CB75643D8E9A7984686F9F61EB697C''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE 
ON *.* TO 'nova_api'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16]/Mysql_user[nova_api@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16]/Mysql_user[nova_api@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'172.17.1.16''", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16]/Mysql_grant[nova_api@172.17.1.16/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16]/Mysql_grant[nova_api@172.17.1.16/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_api'@'172.17.1.17' IDENTIFIED BY PASSWORD '*F4DB0C7B50CB75643D8E9A7984686F9F61EB697C''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_api'@'172.17.1.17' REQUIRE NONE'", > "Notice: 
/Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_user[nova_api@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_user[nova_api@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova_api'@'172.17.1.17''", > "Notice: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_grant[nova_api@172.17.1.17/nova_api.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_api/Openstacklib::Db::Mysql[nova_api]/Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]/Mysql_grant[nova_api@172.17.1.17/nova_api.*]: The container Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_api_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[nova_api]: Unscheduling all events on Openstacklib::Db::Mysql[nova_api]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'%' IDENTIFIED BY PASSWORD '*F4DB0C7B50CB75643D8E9A7984686F9F61EB697C''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'%' REQUIRE NONE'", > "Notice: 
/Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_user[nova_placement@%]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'%''", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_grant[nova_placement@%/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_%]/Mysql_grant[nova_placement@%/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'172.17.1.16' IDENTIFIED BY PASSWORD '*F4DB0C7B50CB75643D8E9A7984686F9F61EB697C''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.16' REQUIRE NONE'", > "Notice: 
/Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16]/Mysql_user[nova_placement@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16]/Mysql_user[nova_placement@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'172.17.1.16''", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16]/Mysql_grant[nova_placement@172.17.1.16/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16]/Mysql_grant[nova_placement@172.17.1.16/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'nova_placement'@'172.17.1.17' IDENTIFIED BY PASSWORD '*F4DB0C7B50CB75643D8E9A7984686F9F61EB697C''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'nova_placement'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 
'nova_placement'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_user[nova_placement@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_user[nova_placement@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `nova_placement`.* TO 'nova_placement'@'172.17.1.17''", > "Notice: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]/ensure: created", > "Debug: /Stage[main]/Nova::Db::Mysql_placement/Openstacklib::Db::Mysql[nova_placement]/Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]/Mysql_grant[nova_placement@172.17.1.17/nova_placement.*]: The container Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[nova_placement_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[nova_placement]: Unscheduling all events on Openstacklib::Db::Mysql[nova_placement]", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db::end]: Resource is being skipped, unscheduling all events", > 
"Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync_api::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::cell_v2::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, 
mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::db_online_data_migrations::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Nova::Deps/Anchor[nova::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf 
--database=mysql -e CREATE USER 'sahara'@'%' IDENTIFIED BY PASSWORD '*F2B619BDD20ECD3345D9A87E9D1086F54FF331BB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_user[sahara@%]: The container Openstacklib::Db::Mysql::Host_access[sahara_%] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'%''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_grant[sahara@%/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_%]/Mysql_grant[sahara@%/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'172.17.1.16' IDENTIFIED BY PASSWORD '*F2B619BDD20ECD3345D9A87E9D1086F54FF331BB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 
MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16]/Mysql_user[sahara@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16]/Mysql_user[sahara@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'172.17.1.16''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16]/Mysql_grant[sahara@172.17.1.16/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16]/Mysql_grant[sahara@172.17.1.16/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'sahara'@'172.17.1.17' IDENTIFIED BY PASSWORD '*F2B619BDD20ECD3345D9A87E9D1086F54FF331BB''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'sahara'@'172.17.1.17' REQUIRE NONE'", > 
"Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_user[sahara@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_user[sahara@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `sahara`.* TO 'sahara'@'172.17.1.17''", > "Notice: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_grant[sahara@172.17.1.17/sahara.*]/ensure: created", > "Debug: /Stage[main]/Sahara::Db::Mysql/Openstacklib::Db::Mysql[sahara]/Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]/Mysql_grant[sahara@172.17.1.17/sahara.*]: The container Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[sahara_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[sahara]: Unscheduling all events on Openstacklib::Db::Mysql[sahara]", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Sahara::Deps/Anchor[sahara::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'%' IDENTIFIED BY PASSWORD '*BD8310267E41F13D6488E52B9E6E1ABCEC5E242A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'%' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'%' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_user[panko@%]: The container Openstacklib::Db::Mysql::Host_access[panko_%] will 
propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'%''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_grant[panko@%/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_%]/Mysql_grant[panko@%/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_%] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_%]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_%]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'172.17.1.16' IDENTIFIED BY PASSWORD '*BD8310267E41F13D6488E52B9E6E1ABCEC5E242A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.16' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.16' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16]/Mysql_user[panko@172.17.1.16]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16]/Mysql_user[panko@172.17.1.16]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'172.17.1.16''", > "Notice: 
/Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16]/Mysql_grant[panko@172.17.1.16/panko.*]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16]/Mysql_grant[panko@172.17.1.16/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_172.17.1.16]", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e CREATE USER 'panko'@'172.17.1.17' IDENTIFIED BY PASSWORD '*BD8310267E41F13D6488E52B9E6E1ABCEC5E242A''", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.17' WITH MAX_USER_CONNECTIONS 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_QUERIES_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0'", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT USAGE ON *.* TO 'panko'@'172.17.1.17' REQUIRE NONE'", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_user[panko@172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_user[panko@172.17.1.17]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17] will propagate my refresh event", > "Debug: Executing: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --database=mysql -e GRANT ALL PRIVILEGES ON `panko`.* TO 'panko'@'172.17.1.17''", > "Notice: /Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_grant[panko@172.17.1.17/panko.*]/ensure: created", > "Debug: 
/Stage[main]/Panko::Db::Mysql/Openstacklib::Db::Mysql[panko]/Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]/Mysql_grant[panko@172.17.1.17/panko.*]: The container Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17] will propagate my refresh event", > "Info: Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]: Unscheduling all events on Openstacklib::Db::Mysql::Host_access[panko_172.17.1.17]", > "Info: Openstacklib::Db::Mysql[panko]: Unscheduling all events on Openstacklib::Db::Mysql[panko]", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::dbsync::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Stage[main]/Panko::Deps/Anchor[panko::service::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation, galera_ready, mysql_database, mysql_grant, mysql_user", > "Debug: Finishing transaction 12151140", > "Debug: Stored state in 0.01 seconds", > "Notice: Applied catalog in 72.36 seconds", > " Total: 105", > " Success: 105", > " Changed: 105", > " Out of sync: 105", > " Skipped: 137", > " Total: 253", > " File: 0.10", > " Mysql database: 0.17", > " Mysql grant: 0.95", > " Mysql user: 1.29", > " Pcmk resource: 10.31", > " Last run: 1529673650", > " Pcmk bundle: 17.68", > " Exec: 31.92", > " Config retrieval: 4.77", > " Total: 75.96", > " Pcmk property: 8.77", > " Config: 1529673573", > "Debug: Finishing transaction 38062800", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle'", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"unknown\", 1]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'aodh' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 58]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'glance' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'gnocchi' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'heat' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'keystone' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'panko' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "stdout: Info: Loading facts", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.25 seconds", > "Info: Applying configuration version '1529673656'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]/File[/var/lib/neutron/l3_haproxy_wrapper]/ensure: defined content as '{md5}e741722509854288f828fa66335ad134'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[l3_haproxy_process_wrapper]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]/File[/var/lib/neutron/keepalived_wrapper]/ensure: defined content as '{md5}3a8c0df398c10f053e45f2f2ea5ccd93'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Keepalived[l3_keepalived]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]/File[/var/lib/neutron/keepalived_state_change_wrapper]/ensure: defined content as '{md5}f72bfec5dc1c16b968223450454f78bf'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Keepalived_state_change[l3_keepalived_state_change]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::L3_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]/File[/var/lib/neutron/dibbler_wrapper]/ensure: defined content as '{md5}d8fd38ee59394a46ad9b984126f1e767'", > "Info: 
Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Dibbler_client[l3_dibbler_daemon]", > "Notice: Applied catalog in 0.02 seconds", > " Total: 4", > " Success: 4", > " Total: 11", > " Out of sync: 4", > " Changed: 4", > " Skipped: 7", > " File: 0.01", > " Config retrieval: 0.35", > " Total: 0.36", > " Last run: 1529673657", > " Config: 1529673656", > "stderr: + STEP=4", > "+ TAGS=file", > "+ CONFIG='include ::tripleo::profile::base::neutron::l3_agent_wrappers'", > "+ EXTRA_ARGS=", > "+ echo '{\"step\": 4}'", > "+ puppet apply --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file -e 'include ::tripleo::profile::base::neutron::l3_agent_wrappers'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.23 seconds", > "Info: Applying configuration version '1529673660'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::Dhcp_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]/File[/var/lib/neutron/dnsmasq_wrapper]/ensure: defined content as '{md5}bdbee777940c5f4b2d9089e50e2791f0'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Dnsmasq[dhcp_dnsmasq_process_wrapper]", > "Notice: /Stage[main]/Tripleo::Profile::Base::Neutron::Dhcp_agent_wrappers/Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]/File[/var/lib/neutron/dhcp_haproxy_wrapper]/ensure: defined content as '{md5}d77797fffe398a35675248a492b97d14'", > "Info: Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]: Unscheduling all events on Tripleo::Profile::Base::Neutron::Wrappers::Haproxy[dhcp_haproxy_process_wrapper]", > "Notice: Applied catalog in 0.01 
seconds", > " Total: 2", > " Success: 2", > " Changed: 2", > " Out of sync: 2", > " Total: 9", > " File: 0.00", > " Config retrieval: 0.32", > " Total: 0.33", > " Last run: 1529673661", > " Config: 1529673660", > "+ CONFIG='include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'", > "+ puppet apply --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file -e 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'", > "stderr: Error: unable to find resource 'redis-bundle'", > "stdout: 31fdb8d2da2f4401fd4ca2e6de67bc76a99c648473b33413609111a32fd5be24", > "stdout: 245b5bee2e5b23151d05fb5c9edff0ccd8b0b0d684bb607592d52ff9d3527b24", > "stdout: 51713312a49aac6739edc218b7d7082d36c494fe68b5086079503ea79ae3f22d", > "stderr: Error: unable to find resource 'haproxy-bundle'", > "Debug: Facter: value for ec2_public_ipv4 is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/redis_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::database::redis_bundle from tripleo/profile/pacemaker/database/redis_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::certificate_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_docker_control_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::database::redis_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::redis_network in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::extra_config_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_tunnel_local_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_tunnel_base_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_bind_ip in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_fqdn in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::tls_proxy_port in JSON backend", > "Debug: hiera(): Looking up redis_certificate_specs in JSON backend", > "Debug: hiera(): Looking up redis_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::database::redis_bundle::control_port in JSON backend", > "Debug: hiera(): Looking up redis_network in JSON backend", > "Debug: hiera(): Looking up redis_file_limit in JSON backend", > "Debug: importing '/etc/puppet/modules/redis/manifests/init.pp' in environment production", > "Debug: Automatically imported redis from redis into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/params.pp' in environment production", > "Debug: Automatically imported redis::params from redis/params into production", > "Debug: hiera(): Looking up redis::activerehashing in JSON backend", > "Debug: hiera(): Looking up redis::aof_load_truncated in JSON backend", > "Debug: hiera(): Looking up redis::aof_rewrite_incremental_fsync in JSON backend", > "Debug: hiera(): Looking up redis::appendfilename in JSON backend", > "Debug: hiera(): Looking up redis::appendfsync in JSON 
backend", > "Debug: hiera(): Looking up redis::appendonly in JSON backend", > "Debug: hiera(): Looking up redis::auto_aof_rewrite_min_size in JSON backend", > "Debug: hiera(): Looking up redis::auto_aof_rewrite_percentage in JSON backend", > "Debug: hiera(): Looking up redis::bind in JSON backend", > "Debug: hiera(): Looking up redis::output_buffer_limit_slave in JSON backend", > "Debug: hiera(): Looking up redis::output_buffer_limit_pubsub in JSON backend", > "Debug: hiera(): Looking up redis::conf_template in JSON backend", > "Debug: hiera(): Looking up redis::config_dir in JSON backend", > "Debug: hiera(): Looking up redis::config_dir_mode in JSON backend", > "Debug: hiera(): Looking up redis::config_file in JSON backend", > "Debug: hiera(): Looking up redis::config_file_mode in JSON backend", > "Debug: hiera(): Looking up redis::config_file_orig in JSON backend", > "Debug: hiera(): Looking up redis::config_group in JSON backend", > "Debug: hiera(): Looking up redis::config_owner in JSON backend", > "Debug: hiera(): Looking up redis::daemonize in JSON backend", > "Debug: hiera(): Looking up redis::databases in JSON backend", > "Debug: hiera(): Looking up redis::default_install in JSON backend", > "Debug: hiera(): Looking up redis::dbfilename in JSON backend", > "Debug: hiera(): Looking up redis::extra_config_file in JSON backend", > "Debug: hiera(): Looking up redis::hash_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::hash_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::hll_sparse_max_bytes in JSON backend", > "Debug: hiera(): Looking up redis::hz in JSON backend", > "Debug: hiera(): Looking up redis::latency_monitor_threshold in JSON backend", > "Debug: hiera(): Looking up redis::list_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::list_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::log_dir in JSON backend", > "Debug: hiera(): Looking up redis::log_dir_mode 
in JSON backend", > "Debug: hiera(): Looking up redis::log_file in JSON backend", > "Debug: hiera(): Looking up redis::log_level in JSON backend", > "Debug: hiera(): Looking up redis::manage_package in JSON backend", > "Debug: hiera(): Looking up redis::manage_repo in JSON backend", > "Debug: hiera(): Looking up redis::masterauth in JSON backend", > "Debug: hiera(): Looking up redis::maxclients in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory_policy in JSON backend", > "Debug: hiera(): Looking up redis::maxmemory_samples in JSON backend", > "Debug: hiera(): Looking up redis::min_slaves_max_lag in JSON backend", > "Debug: hiera(): Looking up redis::min_slaves_to_write in JSON backend", > "Debug: hiera(): Looking up redis::no_appendfsync_on_rewrite in JSON backend", > "Debug: hiera(): Looking up redis::notify_keyspace_events in JSON backend", > "Debug: hiera(): Looking up redis::notify_service in JSON backend", > "Debug: hiera(): Looking up redis::managed_by_cluster_manager in JSON backend", > "Debug: hiera(): Looking up redis::package_ensure in JSON backend", > "Debug: hiera(): Looking up redis::package_name in JSON backend", > "Debug: hiera(): Looking up redis::pid_file in JSON backend", > "Debug: hiera(): Looking up redis::port in JSON backend", > "Debug: hiera(): Looking up redis::protected_mode in JSON backend", > "Debug: hiera(): Looking up redis::ppa_repo in JSON backend", > "Debug: hiera(): Looking up redis::rdbcompression in JSON backend", > "Debug: hiera(): Looking up redis::repl_backlog_size in JSON backend", > "Debug: hiera(): Looking up redis::repl_backlog_ttl in JSON backend", > "Debug: hiera(): Looking up redis::repl_disable_tcp_nodelay in JSON backend", > "Debug: hiera(): Looking up redis::repl_ping_slave_period in JSON backend", > "Debug: hiera(): Looking up redis::repl_timeout in JSON backend", > "Debug: hiera(): Looking up redis::requirepass in JSON backend", > "Debug: 
hiera(): Looking up redis::save_db_to_disk in JSON backend", > "Debug: hiera(): Looking up redis::save_db_to_disk_interval in JSON backend", > "Debug: hiera(): Looking up redis::service_enable in JSON backend", > "Debug: hiera(): Looking up redis::service_ensure in JSON backend", > "Debug: hiera(): Looking up redis::service_group in JSON backend", > "Debug: hiera(): Looking up redis::service_hasrestart in JSON backend", > "Debug: hiera(): Looking up redis::service_hasstatus in JSON backend", > "Debug: hiera(): Looking up redis::service_manage in JSON backend", > "Debug: hiera(): Looking up redis::service_name in JSON backend", > "Debug: hiera(): Looking up redis::service_provider in JSON backend", > "Debug: hiera(): Looking up redis::service_user in JSON backend", > "Debug: hiera(): Looking up redis::set_max_intset_entries in JSON backend", > "Debug: hiera(): Looking up redis::slave_priority in JSON backend", > "Debug: hiera(): Looking up redis::slave_read_only in JSON backend", > "Debug: hiera(): Looking up redis::slave_serve_stale_data in JSON backend", > "Debug: hiera(): Looking up redis::slaveof in JSON backend", > "Debug: hiera(): Looking up redis::slowlog_log_slower_than in JSON backend", > "Debug: hiera(): Looking up redis::slowlog_max_len in JSON backend", > "Debug: hiera(): Looking up redis::stop_writes_on_bgsave_error in JSON backend", > "Debug: hiera(): Looking up redis::syslog_enabled in JSON backend", > "Debug: hiera(): Looking up redis::syslog_facility in JSON backend", > "Debug: hiera(): Looking up redis::tcp_backlog in JSON backend", > "Debug: hiera(): Looking up redis::tcp_keepalive in JSON backend", > "Debug: hiera(): Looking up redis::timeout in JSON backend", > "Debug: hiera(): Looking up redis::unixsocket in JSON backend", > "Debug: hiera(): Looking up redis::unixsocketperm in JSON backend", > "Debug: hiera(): Looking up redis::ulimit in JSON backend", > "Debug: hiera(): Looking up redis::workdir in JSON backend", > "Debug: hiera(): Looking up 
redis::workdir_mode in JSON backend", > "Debug: hiera(): Looking up redis::zset_max_ziplist_entries in JSON backend", > "Debug: hiera(): Looking up redis::zset_max_ziplist_value in JSON backend", > "Debug: hiera(): Looking up redis::cluster_enabled in JSON backend", > "Debug: hiera(): Looking up redis::cluster_config_file in JSON backend", > "Debug: hiera(): Looking up redis::cluster_node_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/redis/manifests/preinstall.pp' in environment production", > "Debug: Automatically imported redis::preinstall from redis/preinstall into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/install.pp' in environment production", > "Debug: Automatically imported redis::install from redis/install into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/config.pp' in environment production", > "Debug: Automatically imported redis::config from redis/config into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/instance.pp' in environment production", > "Debug: Automatically imported redis::instance from redis/instance into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/ulimit.pp' in environment production", > "Debug: Automatically imported redis::ulimit from redis/ulimit into production", > "Debug: importing '/etc/puppet/modules/redis/manifests/service.pp' in environment production", > "Debug: Automatically imported redis::service from redis/service into production", > "Debug: hiera(): Looking up redis_short_node_names in JSON backend", > "Debug: Scope(Redis::Instance[default]): Retrieving template redis/redis.conf.3.2.erb", > "Debug: template[/etc/puppet/modules/redis/templates/redis.conf.3.2.erb]: Bound template variables for /etc/puppet/modules/redis/templates/redis.conf.3.2.erb in 0.01 seconds", > "Debug: template[/etc/puppet/modules/redis/templates/redis.conf.3.2.erb]: Interpolated template 
/etc/puppet/modules/redis/templates/redis.conf.3.2.erb in 0.01 seconds", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[redis] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-redis-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[redis-bundle] with 'before'", > "Debug: Adding relationship from Class[Redis::Preinstall] to Class[Redis::Install] with 'before'", > "Debug: Adding relationship from Class[Redis::Install] to Class[Redis::Config] with 'before'", > "Debug: File[/etc/redis]: Adding default for owner", > "Debug: File[/etc/redis]: Adding default for group", > "Debug: File[/etc/systemd/system/redis.service.d/]: Adding default for mode", > "Debug: File[/etc/redis.conf.puppet]: Adding default for owner", > "Debug: File[/etc/redis.conf.puppet]: Adding default for group", > "Debug: File[/etc/redis.conf.puppet]: Adding default for mode", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.49 seconds", > "Info: Applying configuration version '1529673667'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[redis]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-redis-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[redis-bundle]", > "Debug: /Stage[main]/Redis::Preinstall/before: subscribes to Class[Redis::Install]", > "Debug: /Stage[main]/Redis::Install/before: subscribes to Class[Redis::Config]", > "Debug: /Stage[main]/Redis::Ulimit/Augeas[Systemd redis ulimit]/notify: subscribes to Exec[systemd-reload-redis]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[redis-bundle]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/require: subscribes to Pacemaker::Resource::Bundle[redis-bundle]", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]/subscribe: subscribes to File[/etc/redis.conf.puppet]", > "Debug: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]: Adding autorequire relationship with File[/etc/systemd/system/redis.service.d/]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Redis_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Database::Redis_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Preinstall]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Preinstall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Install]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Redis::Install/Package[redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis::Install/Package[redis]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Config]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Debug: /Stage[main]/Redis::Config/File[/etc/redis]: The container Class[Redis::Config] will propagate my refresh event", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Debug: /Stage[main]/Redis::Config/File[/var/log/redis]: The container Class[Redis::Config] will propagate my refresh event", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Debug: /Stage[main]/Redis::Config/File[/var/lib/redis]: The container Class[Redis::Config] will propagate my refresh event", > "Debug: Redis::Instance[default]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Redis::Instance[default]: Resource is being skipped, unscheduling all events", > "Debug: Class[Redis::Ulimit]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Ulimit]: Resource is being skipped, unscheduling all events", > "Notice: 
/Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Debug: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]: The container Class[Redis::Ulimit] will propagate my refresh event", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Debug: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]: The container Class[Redis::Ulimit] will propagate my refresh event", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Opening augeas with root /, lens path , flags 64", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Augeas version 1.4.0 is installed", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Will attempt to save and only run if files changed", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): sending command 'defnode' with params [\"nofile\", \"/etc/systemd/system/redis.service.d/limits.conf/Service/LimitNOFILE\", \"\"]", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): sending command 'set' with params [\"$nofile/value\", \"10240\"]", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Skipping because no files were changed", > "Debug: Augeas[Systemd redis ulimit](provider=augeas): Closed the augeas connection", > "Info: Class[Redis::Ulimit]: Unscheduling all events on Class[Redis::Ulimit]", > "Debug: Class[Redis::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Redis::Service]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Redis/Exec[systemd-reload-redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis/Exec[systemd-reload-redis]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[redis-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[redis-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
/Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1eb5n0x returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1eb5n0x property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}94de54ece28c930b89fefe1be0a08a8f'", > "Info: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]: Scheduling refresh of Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]: The container Redis::Instance[default] will propagate my refresh event", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Unscheduling all events on Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]", > "Info: Redis::Instance[default]: Unscheduling all events on Redis::Instance[default]", > "Info: Class[Redis::Config]: Unscheduling all events on Class[Redis::Config]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-17wn7pr returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-17wn7pr property show | grep redis-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property 
exists: property show | grep redis-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1neihg9 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1neihg9 property set --node controller-0 redis-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1neihg9 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1neihg9.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 redis-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/Pcmk_property[property-controller-0-redis-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Property[redis-role-controller-0]/Pcmk_property[property-controller-0-redis-role]: The container Pacemaker::Property[redis-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[redis-role-controller-0]: Unscheduling all events on Pacemaker::Property[redis-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[redis-bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Bundle[redis-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-fgku6a returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-fgku6a constraint list | grep location-redis-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-111ay02 returned ", > "Debug: 
/usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-111ay02 resource show redis-bundle > /dev/null 2>&1", > "Debug: Exists: bundle redis-bundle exists 1 location exists 1 deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-17iw2j2 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-17iw2j2 resource bundle create redis-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest replicas=1 masters=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=redis-cfg-files source-dir=/var/lib/kolla/config_files/redis.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=redis-cfg-data-redis source-dir=/var/lib/config-data/puppet-generated/redis/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=redis-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=redis-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=redis-lib source-dir=/var/lib/redis target-dir=/var/lib/redis options=rw storage-map id=redis-log source-dir=/var/log/containers/redis target-dir=/var/log/redis options=rw storage-map id=redis-run source-dir=/var/run/redis target-dir=/var/run/redis options=rw storage-map id=redis-pki-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=redis-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=redis-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=redis-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=redis-dev-log 
source-dir=/dev/log target-dir=/dev/log options=rw network control-port=3124 --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-17iw2j2 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-17iw2j2.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: location_rule_create: constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-19bapnv returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-19bapnv constraint location redis-bundle rule resource-discovery=exclusive score=0 redis-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-19bapnv diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-19bapnv.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-kb6u3o returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-kb6u3o resource enable redis-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-kb6u3o diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-kb6u3o.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Bundle[redis-bundle]/Pcmk_bundle[redis-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Bundle[redis-bundle]/Pcmk_bundle[redis-bundle]: The container Pacemaker::Resource::Bundle[redis-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[redis-bundle]: Unscheduling all 
events on Pacemaker::Resource::Bundle[redis-bundle]", > "Debug: Pacemaker::Resource::Ocf[redis]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ocf[redis]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hamdck returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hamdck constraint list | grep location-redis-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-8ht3ho returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-8ht3ho resource show redis > /dev/null 2>&1", > "Debug: Exists: resource redis exists 1 location exists 0 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s4uub4 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s4uub4 resource create redis ocf:heartbeat:redis wait_last_known_master=true meta notify=true ordered=true interleave=true container-attribute-target=host op start timeout=200s stop timeout=200s bundle redis-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s4uub4 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s4uub4.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/Pcmk_resource[redis]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Redis_bundle/Pacemaker::Resource::Ocf[redis]/Pcmk_resource[redis]: The container Pacemaker::Resource::Ocf[redis] will propagate my refresh event", > "Info: 
Pacemaker::Resource::Ocf[redis]: Unscheduling all events on Pacemaker::Resource::Ocf[redis]", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Finishing transaction 36463280", > "Notice: Applied catalog in 36.07 seconds", > " Total: 13", > " Success: 13", > " Changed: 13", > " Out of sync: 13", > " Skipped: 25", > " Total: 42", > " Augeas: 0.01", > " File: 0.02", > " Config retrieval: 1.63", > " Last run: 1529673705", > " Pcmk bundle: 17.53", > " Total: 37.48", > " Pcmk property: 8.35", > " 
Pcmk resource: 9.94", > " Config: 1529673667", > "Debug: Finishing transaction 42108060", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle'", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/haproxy_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::haproxy_bundle from tripleo/profile/pacemaker/haproxy_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::haproxy_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::enable_load_balancer in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::ca_bundle in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::crl_file in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::pacemaker::haproxy_bundle::internal_certs_directory in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::internal_keys_directory in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::deployed_ssl_cert_path in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::haproxy_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up haproxy_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_load_balancer in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ca_bundle in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::crl_file in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::service_certificate in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/haproxy.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::haproxy from tripleo/profile/base/haproxy into production", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::certificates_specs in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::enable_load_balancer in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::manage_firewall in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::haproxy::step in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::manage_firewall in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy from tripleo/haproxy into production", > "Debug: hiera(): Looking up tripleo::haproxy::controller_virtual_ip in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::public_virtual_ip in JSON backend", > "Debug: hiera(): Looking up 
tripleo::haproxy::haproxy_service_manage in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_global_maxconn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_default_maxconn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_default_timeout in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_listen_bind_param in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_member_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_log_address in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::activate_httplog in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_globals_override in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_defaults_override in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_daemon in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_socket_access_level in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_user in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_password in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::controller_hosts in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::controller_hosts_names in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::use_internal_certificates in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::enable_internal_tls in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ssl_cipher_suite in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ssl_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats_certificate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_stats in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin in JSON backend", > "Debug: hiera(): 
Looking up tripleo::haproxy::keystone_public in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::neutron in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::cinder in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::congress in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::manila in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::sahara in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::tacker in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::trove in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::glance_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_metadata in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::aodh in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::panko in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::barbican in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mistral in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_inspector in JSON backend", > "Debug: hiera(): Looking up 
tripleo::haproxy::octavia in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::designate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::kubernetes_master in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_clustercheck in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_max_conn in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mysql_member_options in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::rabbitmq in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::etcd in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::docker_registry in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::redis in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::redis_password in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::midonet_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_api in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ceph_rgw in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::opendaylight in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs_manage_lb in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_ws in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ui in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::aodh_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::barbican_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ceph_rgw_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::cinder_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::congress_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::designate_network in 
JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::docker_registry_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::glance_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_inspector_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ironic_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::kubernetes_master_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_public_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::manila_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::mistral_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::neutron_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::octavia_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::opendaylight_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::panko_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ovn_dbs_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::ec2_api_network in JSON backend", > "Debug: hiera(): Looking 
up tripleo::haproxy::ec2_api_metadata_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::etcd_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::sahara_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::tacker_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::trove_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::zaqar_api_network in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::service_ports in JSON backend", > "Debug: hiera(): Looking up controller_node_ips in JSON backend", > "Debug: hiera(): Looking up controller_node_names in JSON backend", > "Debug: hiera(): Looking up nova_vnc_proxy_enabled in JSON backend", > "Debug: hiera(): Looking up swift_proxy_enabled in JSON backend", > "Debug: hiera(): Looking up heat_api_enabled in JSON backend", > "Debug: hiera(): Looking up heat_api_cfn_enabled in JSON backend", > "Debug: hiera(): Looking up horizon_enabled in JSON backend", > "Debug: hiera(): Looking up mysql_enabled in JSON backend", > "Debug: hiera(): Looking up kubernetes_master_enabled in JSON backend", > "Debug: hiera(): Looking up etcd_enabled in JSON backend", > "Debug: hiera(): Looking up enable_docker_registry in JSON backend", > "Debug: hiera(): Looking up redis_enabled in JSON backend", > "Debug: hiera(): Looking up ceph_rgw_enabled in JSON backend", > "Debug: hiera(): Looking up opendaylight_api_enabled in JSON backend", > "Debug: hiera(): Looking up ovn_dbs_enabled in JSON backend", > "Debug: hiera(): Looking up tripleo_ui_enabled in JSON backend", > "Debug: hiera(): Looking up enable_ui in JSON backend", > "Debug: hiera(): Looking up aodh_api_network in JSON backend", > "Debug: hiera(): Looking up barbican_api_network in JSON backend", > "Debug: hiera(): Looking up ceph_rgw_network in JSON backend", > "Debug: hiera(): Looking 
up cinder_api_network in JSON backend", > "Debug: hiera(): Looking up congress_api_network in JSON backend", > "Debug: hiera(): Looking up designate_api_network in JSON backend", > "Debug: hiera(): Looking up docker_registry_network in JSON backend", > "Debug: hiera(): Looking up glance_api_network in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_network in JSON backend", > "Debug: hiera(): Looking up heat_api_network in JSON backend", > "Debug: hiera(): Looking up heat_api_cfn_network in JSON backend", > "Debug: hiera(): Looking up horizon_network in JSON backend", > "Debug: hiera(): Looking up ironic_inspector_network in JSON backend", > "Debug: hiera(): Looking up ironic_api_network in JSON backend", > "Debug: hiera(): Looking up kubernetes_master_network in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_network in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_network in JSON backend", > "Debug: hiera(): Looking up manila_api_network in JSON backend", > "Debug: hiera(): Looking up mistral_api_network in JSON backend", > "Debug: hiera(): Looking up neutron_api_network in JSON backend", > "Debug: hiera(): Looking up nova_api_network in JSON backend", > "Debug: hiera(): Looking up nova_vnc_proxy_network in JSON backend", > "Debug: hiera(): Looking up nova_placement_network in JSON backend", > "Debug: hiera(): Looking up octavia_api_network in JSON backend", > "Debug: hiera(): Looking up opendaylight_api_network in JSON backend", > "Debug: hiera(): Looking up panko_api_network in JSON backend", > "Debug: hiera(): Looking up ovn_dbs_network in JSON backend", > "Debug: hiera(): Looking up ec2_api_network in JSON backend", > "Debug: hiera(): Looking up etcd_network in JSON backend", > "Debug: hiera(): Looking up sahara_api_network in JSON backend", > "Debug: hiera(): Looking up swift_proxy_network in JSON backend", > "Debug: hiera(): Looking up tacker_api_network in JSON backend", > "Debug: hiera(): Looking up 
trove_api_network in JSON backend", > "Debug: hiera(): Looking up zaqar_api_network in JSON backend", > "Debug: hiera(): Looking up mysql_vip in JSON backend", > "Debug: hiera(): Looking up rabbitmq_vip in JSON backend", > "Debug: hiera(): Looking up redis_vip in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/init.pp' in environment production", > "Debug: Automatically imported haproxy from haproxy into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/params.pp' in environment production", > "Debug: Automatically imported haproxy::params from haproxy/params into production", > "Debug: hiera(): Looking up haproxy::package_ensure in JSON backend", > "Debug: hiera(): Looking up haproxy::package_name in JSON backend", > "Debug: hiera(): Looking up haproxy::service_ensure in JSON backend", > "Debug: hiera(): Looking up haproxy::service_options in JSON backend", > "Debug: hiera(): Looking up haproxy::sysconfig_options in JSON backend", > "Debug: hiera(): Looking up haproxy::merge_options in JSON backend", > "Debug: hiera(): Looking up haproxy::restart_command in JSON backend", > "Debug: hiera(): Looking up haproxy::custom_fragment in JSON backend", > "Debug: hiera(): Looking up haproxy::config_dir in JSON backend", > "Debug: hiera(): Looking up haproxy::config_file in JSON backend", > "Debug: hiera(): Looking up haproxy::manage_config_dir in JSON backend", > "Debug: hiera(): Looking up haproxy::config_validate_cmd in JSON backend", > "Debug: hiera(): Looking up haproxy::manage_service in JSON backend", > "Debug: hiera(): Looking up haproxy::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/instance.pp' in environment production", > "Debug: Automatically imported haproxy::instance from haproxy/instance into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/endpoint.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::endpoint from 
tripleo/haproxy/endpoint into production", > "Debug: hiera(): Looking up enabled_services in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/service_endpoints.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::service_endpoints from tripleo/haproxy/service_endpoints into production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/stats.pp' in environment production", > "Debug: Automatically imported tripleo::haproxy::stats from tripleo/haproxy/stats into production", > "Debug: hiera(): Looking up tripleo::haproxy::stats::certificate in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/listen.pp' in environment production", > "Debug: Automatically imported haproxy::listen from haproxy/listen into production", > "Debug: hiera(): Looking up keystone_admin_api_vip in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_node_ips in JSON backend", > "Debug: hiera(): Looking up keystone_admin_api_node_names in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_vip in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_node_ips in JSON backend", > "Debug: hiera(): Looking up keystone_public_api_node_names in JSON backend", > "Debug: hiera(): Looking up neutron_api_vip in JSON backend", > "Debug: hiera(): Looking up neutron_api_node_ips in JSON backend", > "Debug: hiera(): Looking up neutron_api_node_names in JSON backend", > "Debug: hiera(): Looking up cinder_api_vip in JSON backend", > "Debug: hiera(): Looking up cinder_api_node_ips in JSON backend", > "Debug: hiera(): Looking up cinder_api_node_names in JSON backend", > "Debug: hiera(): Looking up sahara_api_vip in JSON backend", > "Debug: hiera(): Looking up sahara_api_node_ips in JSON backend", > "Debug: hiera(): Looking up sahara_api_node_names in JSON backend", > "Debug: hiera(): Looking up glance_api_vip in JSON backend", > "Debug: hiera(): Looking up glance_api_node_ips 
in JSON backend", > "Debug: hiera(): Looking up glance_api_node_names in JSON backend", > "Debug: hiera(): Looking up nova_api_vip in JSON backend", > "Debug: hiera(): Looking up nova_api_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_api_node_names in JSON backend", > "Debug: hiera(): Looking up nova_placement_vip in JSON backend", > "Debug: hiera(): Looking up nova_placement_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_placement_node_names in JSON backend", > "Debug: hiera(): Looking up nova_metadata_vip in JSON backend", > "Debug: hiera(): Looking up nova_metadata_node_ips in JSON backend", > "Debug: hiera(): Looking up nova_metadata_node_names in JSON backend", > "Debug: hiera(): Looking up aodh_api_vip in JSON backend", > "Debug: hiera(): Looking up aodh_api_node_ips in JSON backend", > "Debug: hiera(): Looking up aodh_api_node_names in JSON backend", > "Debug: hiera(): Looking up panko_api_vip in JSON backend", > "Debug: hiera(): Looking up panko_api_node_ips in JSON backend", > "Debug: hiera(): Looking up panko_api_node_names in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_vip in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_node_ips in JSON backend", > "Debug: hiera(): Looking up gnocchi_api_node_names in JSON backend", > "Debug: hiera(): Looking up swift_proxy_vip in JSON backend", > "Debug: hiera(): Looking up swift_proxy_node_ips in JSON backend", > "Debug: hiera(): Looking up swift_proxy_node_names in JSON backend", > "Debug: hiera(): Looking up heat_api_vip in JSON backend", > "Debug: hiera(): Looking up heat_api_node_ips in JSON backend", > "Debug: hiera(): Looking up heat_api_node_names in JSON backend", > "Debug: hiera(): Looking up horizon_vip in JSON backend", > "Debug: hiera(): Looking up horizon_node_ips in JSON backend", > "Debug: hiera(): Looking up horizon_node_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/haproxy/horizon_endpoint.pp' in environment 
production", > "Debug: Automatically imported tripleo::haproxy::horizon_endpoint from tripleo/haproxy/horizon_endpoint into production", > "Debug: hiera(): Looking up tripleo::haproxy::horizon_endpoint::public_certificate in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::horizon::options in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/balancermember.pp' in environment production", > "Debug: Automatically imported haproxy::balancermember from haproxy/balancermember into production", > "Debug: hiera(): Looking up mysql_node_ips in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall.pp' in environment production", > "Debug: Automatically imported tripleo::firewall from tripleo/firewall into production", > "Debug: hiera(): Looking up tripleo::firewall::firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_chains in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::purge_firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_pre_extras in JSON backend", > "Debug: hiera(): Looking up tripleo::firewall::firewall_post_extras in JSON backend", > "Debug: Resource class[tripleo::firewall::pre] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::pre] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/pre.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::pre from tripleo/firewall/pre into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/init.pp' in environment production", > "Debug: Automatically imported firewall from firewall into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/params.pp' in environment production", > "Debug: Automatically imported 
firewall::params from firewall/params into production", > "Debug: hiera(): Looking up firewall::ensure in JSON backend", > "Debug: hiera(): Looking up firewall::ensure_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::pkg_ensure in JSON backend", > "Debug: hiera(): Looking up firewall::service_name in JSON backend", > "Debug: hiera(): Looking up firewall::service_name_v6 in JSON backend", > "Debug: hiera(): Looking up firewall::package_name in JSON backend", > "Debug: hiera(): Looking up firewall::ebtables_manage in JSON backend", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux.pp' in environment production", > "Debug: Automatically imported firewall::linux from firewall/linux into production", > "Debug: importing '/etc/puppet/modules/firewall/manifests/linux/redhat.pp' in environment production", > "Debug: Automatically imported firewall::linux::redhat from firewall/linux/redhat into production", > "Debug: hiera(): Looking up firewall::linux::redhat::package_ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/rule.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::rule from tripleo/firewall/rule into production", > "Debug: Resource class[tripleo::firewall::post] was not determined to be defined", > "Debug: Create new resource class[tripleo::firewall::post] with params {\"firewall_settings\"=>{}}", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/post.pp' in environment production", > "Debug: Automatically imported tripleo::firewall::post from tripleo/firewall/post into production", > "Debug: hiera(): Looking up tripleo::firewall::post::debug in JSON backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Debug: hiera(): Looking up service_names in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/firewall/service_rules.pp' in environment production", > "Debug: 
Automatically imported tripleo::firewall::service_rules from tripleo/firewall/service_rules into production", > "Debug: hiera(): Looking up redis_node_ips in JSON backend", > "Debug: hiera(): Looking up redis_node_names in JSON backend", > "Debug: hiera(): Looking up midonet_cluster_vip in JSON backend", > "Debug: importing '/etc/puppet/modules/concat/manifests/init.pp' in environment production", > "Debug: Automatically imported concat from concat into production", > "Debug: hiera(): Looking up haproxy_short_node_names in JSON backend", > "Debug: hiera(): Looking up controller_virtual_ip in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp' in environment production", > "Debug: Automatically imported tripleo::pacemaker::haproxy_with_vip from tripleo/pacemaker/haproxy_with_vip into production", > "Debug: hiera(): Looking up public_virtual_ip in JSON backend", > "Debug: hiera(): Looking up network_virtual_ips in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/config.pp' in environment production", > "Debug: Automatically imported haproxy::config from haproxy/config into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/install.pp' in environment production", > "Debug: Automatically imported haproxy::install from haproxy/install into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/service.pp' in environment production", > "Debug: Automatically imported haproxy::service from haproxy/service into production", > "Debug: hiera(): Looking up tripleo.aodh_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.haproxy_endpoints in JSON backend", > "Debug: hiera(): 
Looking up tripleo.aodh_evaluator.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::haproxy_endpoints in 
JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.certmonger_user.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.certmonger_user.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::certmonger_user::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::certmonger_user::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_backup.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_backup.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_backup::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_backup::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo.cinder_scheduler.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up 
tripleo::glance_registry_disabled::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.haproxy_userlists 
in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.haproxy_endpoints in JSON backend", 
> "Debug: hiera(): Looking up tripleo.keystone.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::haproxy_userlists in JSON backend", > "Debug: 
hiera(): Looking up tripleo.neutron_plugin_ml2.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_plugin_ml2.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up 
tripleo::nova_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_placement::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_placement::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up 
tripleo.nova_vnc_proxy.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_vnc_proxy::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking 
up tripleo::oslo_messaging_rpc::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_api.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_api.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_api::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_api::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_engine.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_engine.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_engine::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_engine::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up 
tripleo::sshd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_proxy.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.haproxy_endpoints in JSON 
backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_client.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_client.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_client::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_client::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_compute.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_compute.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_compute::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_compute::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_compute.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_compute.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_compute::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_compute::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_libvirt.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_libvirt.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_libvirt::haproxy_endpoints in JSON backend", > "Debug: hiera(): 
Looking up tripleo::nova_libvirt::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_migration_target.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_migration_target.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_migration_target::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_migration_target::haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_osd.haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_osd.haproxy_userlists in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_osd::haproxy_endpoints in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_osd::haproxy_userlists in JSON backend", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/backend.pp' in environment production", > "Debug: Automatically imported haproxy::backend from haproxy/backend into production", > "Debug: importing '/etc/puppet/modules/haproxy/manifests/globals.pp' in environment production", > "Debug: Automatically imported haproxy::globals from haproxy/globals into production", > "Debug: hiera(): Looking up haproxy::globals::sort_options_alphabetic in JSON backend", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.04 seconds", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb 
in 0.07 seconds", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_mode.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_mode.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_mode.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_mode.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[haproxy.stats]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_options.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/fragments/_options.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_options.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_options.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.08 seconds", > "Debug: importing '/etc/puppet/modules/concat/manifests/fragment.pp' in environment production", > "Debug: Automatically imported concat::fragment from concat/fragment into production", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_admin::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for 
haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::keystone_public::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::neutron::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::cinder::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for member_options", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for 
public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::sahara::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::glance_api::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_osapi::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for use_internal_certificates", > "Debug: 
Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_placement::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_metadata::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for member_options", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::nova_novncproxy::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for use_internal_certificates", > "Debug: 
Tripleo::Haproxy::Endpoint[aodh]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::aodh::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::panko::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for listen_options", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::gnocchi::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for internal_certificates_specs", > "Debug: 
Tripleo::Haproxy::Endpoint[swift_proxy_server]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::swift_proxy_server::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::heat_api::options in JSON backend", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for haproxy_listen_bind_param", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for public_certificate", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for use_internal_certificates", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for internal_certificates_specs", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Adding default for manage_firewall", > "Debug: hiera(): Looking up tripleo::haproxy::heat_cfn::options in JSON backend", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.06 seconds", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: 
template[/etc/puppet/modules/haproxy/templates/fragments/_mode.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_mode.erb in 0.06 seconds", > "Debug: Scope(Haproxy::Listen[horizon]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.13 seconds", > "Debug: Scope(Haproxy::Balancermember[horizon_172.17.1.16_controller-0.internalapi.localdomain]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_balancermember.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[mysql]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Balancermember[mysql-backup]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: hiera(): Looking up tripleo.aodh_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_api::firewall_rules in JSON 
backend", > "Debug: hiera(): Looking up tripleo.aodh_evaluator.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_evaluator::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_listener.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_listener::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.aodh_notifier.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::aodh_notifier::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ca_certs.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ca_certs::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_central.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_central::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceilometer_agent_notification.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceilometer_agent_notification::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mgr.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mgr::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ceph_mon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ceph_mon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.certmonger_user.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::certmonger_user::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_backup.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_backup::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_scheduler.firewall_rules in 
JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.cinder_volume.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::cinder_volume::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.clustercheck.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::clustercheck::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.docker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::docker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.glance_registry_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::glance_registry_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_metricd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_metricd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.gnocchi_statsd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::gnocchi_statsd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.haproxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::haproxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_api_cloudwatch_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cloudwatch_disabled::firewall_rules in JSON 
backend", > "Debug: hiera(): Looking up tripleo.heat_api_cfn.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_api_cfn::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.heat_engine.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::heat_engine::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.horizon.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::horizon::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.iscsid.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::iscsid::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.kernel.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::kernel::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.keystone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::keystone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.memcached.firewall_rules in JSON backend", > "Debug: hiera(): Looking up memcached_network in JSON backend", > "Debug: hiera(): Looking up internal_api_subnet in JSON backend", > "Debug: hiera(): Looking up tripleo::memcached::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mongodb_disabled.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mongodb_disabled::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.mysql_client.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::mysql_client::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo.neutron_plugin_ml2.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_plugin_ml2::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_dhcp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_dhcp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_l3.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_l3::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.neutron_ovs_agent.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::neutron_ovs_agent::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_conductor.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_conductor::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_consoleauth.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_consoleauth::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_metadata.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_metadata::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_placement.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_placement::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_scheduler.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::nova_scheduler::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.nova_vnc_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo::nova_vnc_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.ntp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::ntp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.logrotate_crond.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::logrotate_crond::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.pacemaker.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::pacemaker::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.panko_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::panko_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_rpc.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_rpc::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.oslo_messaging_notify.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::oslo_messaging_notify::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.redis.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::redis::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_api.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_api::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sahara_engine.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sahara_engine::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.snmp.firewall_rules in JSON backend", > "Debug: hiera(): Looking up snmpd_network in JSON backend", > "Debug: hiera(): Looking up tripleo::snmp::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.sshd.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::sshd::firewall_rules in JSON backend", > "Debug: hiera(): Looking up 
tripleo.swift_proxy.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_proxy::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_ringbuilder.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_ringbuilder::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.swift_storage.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::swift_storage::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.timezone.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::timezone::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_firewall.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_firewall::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tripleo_packages.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tripleo_packages::firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo.tuned.firewall_rules in JSON backend", > "Debug: hiera(): Looking up tripleo::tuned::firewall_rules in JSON backend", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/fragments/_bind.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/fragments/_bind.erb in 0.01 seconds", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[redis]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.02 seconds", > "Debug: Scope(Haproxy::Balancermember[redis]): Retrieving template 
haproxy/haproxy_balancermember.erb", > "Debug: hiera(): Looking up haproxy_docker in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/ip.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::ip from pacemaker/resource/ip into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/constraint/order.pp' in environment production", > "Debug: Automatically imported pacemaker::constraint::order from pacemaker/constraint/order into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/constraint/colocation.pp' in environment production", > "Debug: Automatically imported pacemaker::constraint::colocation from pacemaker/constraint/colocation into production", > "Debug: Scope(Haproxy::Config[haproxy]): Retrieving template haproxy/haproxy-base.cfg.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb]: Bound template variables for /etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb in 0.00 seconds", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy-base.cfg.erb in 0.00 seconds", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[keystone_admin]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.01 seconds", > "Debug: Scope(Haproxy::Balancermember[keystone_admin]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving 
template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[keystone_public]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[keystone_public]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[neutron]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[neutron]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[cinder]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[cinder]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[sahara]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[sahara]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template 
haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[glance_api]): Retrieving template haproxy/fragments/_options.erb", > "Debug: template[/etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb]: Interpolated template /etc/puppet/modules/haproxy/templates/haproxy_listen_block.erb in 0.06 seconds", > "Debug: Scope(Haproxy::Balancermember[glance_api]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_osapi]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_osapi]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_placement]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_placement]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: 
Scope(Haproxy::Listen[nova_metadata]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_metadata]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[nova_novncproxy]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[nova_novncproxy]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[aodh]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[aodh]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[panko]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[panko]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: 
Scope(Haproxy::Listen[gnocchi]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[gnocchi]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[swift_proxy_server]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[swift_proxy_server]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[heat_api]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[heat_api]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/haproxy_listen_block.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_bind.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_mode.erb", > "Debug: Scope(Haproxy::Listen[heat_cfn]): Retrieving template haproxy/fragments/_options.erb", > "Debug: Scope(Haproxy::Balancermember[heat_cfn]): Retrieving template haproxy/haproxy_balancermember.erb", > "Debug: hiera(): Looking up pacemaker::resource::ip::deep_compare in JSON backend", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-192.168.24.14-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to 
Pcmk_constraint[colo-ip-192.168.24.14-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-10.0.0.110-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-10.0.0.110-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.1.11-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.1.11-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.1.17-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.1.17-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.3.15-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.3.15-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[order-ip-172.17.4.15-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_constraint[colo-ip-172.17.4.15-haproxy-bundle] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-192.168.24.14] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-10.0.0.110] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.1.11] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.1.17] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.3.15] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_resource[ip-172.17.4.15] with 'before'", > "Debug: 
Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-haproxy-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Class[Tripleo::Firewall::Pre] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pcsd] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[corosync] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[firewalld] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[iptables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Service[ip6tables] to Class[Tripleo::Firewall::Post] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding 
relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy 
ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > 
"Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to 
Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[124 snmp ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 
keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] 
with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 
swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[119 cinder ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 
neutron gre networks ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to 
Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[124 snmp ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 
'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 
ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from 
Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[124 snmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[000 accept related established rules ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[001 accept all icmp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[002 accept all to lo interface ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[003 accept ssh ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[004 accept ipv6 dhcpv6 ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[998 log all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[999 drop all ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 mysql_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 redis_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_admin_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 keystone_public_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 neutron_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[100 cinder_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 cinder_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 sahara_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 glance_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_osapi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_placement_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_metadata_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 nova_novncproxy_haproxy_ssl ipv6] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 aodh_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 panko_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 gnocchi_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 
swift_proxy_server_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 swift_proxy_server_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_api_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[100 heat_cfn_haproxy_ssl ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv4] to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[128 aodh-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 ceph_mgr ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[110 ceph_mon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[119 cinder ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[120 iscsi initiator ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[112 glance_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[129 gnocchi-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] 
with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 gnocchi-statsd ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[107 haproxy stats ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[125 heat_cfn ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[127 horizon ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[111 keystone ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[121 memcached ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 
mysql galera-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[104 mysql galera-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[114 neutron api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[115 neutron dhcp input ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[116 neutron dhcp output ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[106 neutron_l3 vrrp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[118 neutron vxlan networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[136 neutron gre networks ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: 
Adding relationship from Firewall[136 neutron gre networks ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[113 nova_api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[138 nova_placement ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[137 nova_vnc_proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[105 ntp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[130 pacemaker tcp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[131 pacemaker udp ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 
panko-api ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[140 panko-api ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[109 rabbitmq-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[108 redis-bundle ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[132 sahara ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[124 snmp ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[122 swift proxy ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv4] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Firewall[123 swift storage ipv6] to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup] with 'before'", > "Debug: Adding relationship from Haproxy::Listen[haproxy.stats] to Exec[haproxy-reload] with 
'notify'", > "Debug: Adding relationship from Haproxy::Listen[horizon] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[mysql] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[redis] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[keystone_admin] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[keystone_public] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[neutron] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[cinder] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[sahara] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[glance_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_osapi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_placement] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_metadata] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[nova_novncproxy] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[aodh] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[panko] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[gnocchi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[swift_proxy_server] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[heat_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Listen[heat_cfn] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from 
Haproxy::Balancermember[horizon_172.17.1.16_controller-0.internalapi.localdomain] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[mysql-backup] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[redis] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[keystone_admin] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[keystone_public] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[neutron] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[cinder] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[sahara] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[glance_api] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_osapi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_placement] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_metadata] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[nova_novncproxy] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[aodh] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[panko] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[gnocchi] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[swift_proxy_server] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[heat_api] to Exec[haproxy-reload] with 
'notify'", > "Debug: Adding relationship from Haproxy::Balancermember[heat_cfn] to Exec[haproxy-reload] with 'notify'", > "Debug: Adding relationship from Anchor[haproxy::haproxy::begin] to Haproxy::Install[haproxy] with 'before'", > "Debug: Adding relationship from Haproxy::Install[haproxy] to Haproxy::Config[haproxy] with 'before'", > "Debug: Adding relationship from Haproxy::Config[haproxy] to Haproxy::Service[haproxy] with 'notify'", > "Debug: Adding relationship from Haproxy::Service[haproxy] to Anchor[haproxy::haproxy::end] with 'before'", > "Debug: Adding relationship from File[/var/lib/tripleo/pacemaker-restarts] to Exec[haproxy-clone resource restart flag] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[control_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[control_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[control_vip-then-haproxy] to Pacemaker::Constraint::Colocation[control_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[public_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[public_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[public_vip-then-haproxy] to Pacemaker::Constraint::Colocation[public_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[redis_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[redis_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[redis_vip-then-haproxy] to 
Pacemaker::Constraint::Colocation[redis_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[internal_api_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] to Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[storage_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[storage_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[storage_vip-then-haproxy] to Pacemaker::Constraint::Colocation[storage_vip-with-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Ip[storage_mgmt_vip] to Pacemaker::Resource::Bundle[haproxy-bundle] with 'before'", > "Debug: Adding relationship from Pacemaker::Resource::Bundle[haproxy-bundle] to Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] with 'before'", > "Debug: Adding relationship from Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] to Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy] with 'before'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 5.72 seconds", > "Debug: /Firewall[000 accept related established rules ipv4]: [validate]", > "Debug: /Firewall[000 accept related established rules ipv6]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv4]: [validate]", > "Debug: /Firewall[001 accept all icmp ipv6]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: [validate]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: 
[validate]", > "Debug: /Firewall[003 accept ssh ipv4]: [validate]", > "Debug: /Firewall[003 accept ssh ipv6]: [validate]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: [validate]", > "Debug: /Firewall[998 log all ipv4]: [validate]", > "Debug: /Firewall[998 log all ipv6]: [validate]", > "Debug: /Firewall[999 drop all ipv4]: [validate]", > "Debug: /Firewall[999 drop all ipv6]: [validate]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 redis_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 redis_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 
glance_api_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 panko_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 panko_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: 
[validate]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: [validate]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: [validate]", > "Debug: /Firewall[128 aodh-api ipv4]: [validate]", > "Debug: /Firewall[128 aodh-api ipv6]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv4]: [validate]", > "Debug: /Firewall[113 ceph_mgr ipv6]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv4]: [validate]", > "Debug: /Firewall[110 ceph_mon ipv6]: [validate]", > "Debug: /Firewall[119 cinder ipv4]: [validate]", > "Debug: /Firewall[119 cinder ipv6]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv4]: [validate]", > "Debug: /Firewall[120 iscsi initiator ipv6]: [validate]", > "Debug: /Firewall[112 glance_api ipv4]: [validate]", > "Debug: /Firewall[112 glance_api ipv6]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv4]: [validate]", > "Debug: /Firewall[129 gnocchi-api ipv6]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: [validate]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv4]: [validate]", > "Debug: /Firewall[107 haproxy stats ipv6]: [validate]", > "Debug: /Firewall[125 heat_api ipv4]: [validate]", > "Debug: /Firewall[125 heat_api ipv6]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv4]: [validate]", > "Debug: /Firewall[125 heat_cfn ipv6]: [validate]", > "Debug: /Firewall[127 horizon ipv4]: [validate]", > "Debug: /Firewall[127 horizon ipv6]: [validate]", > "Debug: /Firewall[111 keystone ipv4]: [validate]", > "Debug: /Firewall[111 keystone ipv6]: [validate]", > "Debug: /Firewall[121 memcached ipv4]: [validate]", > "Debug: 
/Firewall[104 mysql galera-bundle ipv4]: [validate]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: [validate]", > "Debug: /Firewall[114 neutron api ipv4]: [validate]", > "Debug: /Firewall[114 neutron api ipv6]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: [validate]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: [validate]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: [validate]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: [validate]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv4]: [validate]", > "Debug: /Firewall[136 neutron gre networks ipv6]: [validate]", > "Debug: /Firewall[113 nova_api ipv4]: [validate]", > "Debug: /Firewall[113 nova_api ipv6]: [validate]", > "Debug: /Firewall[138 nova_placement ipv4]: [validate]", > "Debug: /Firewall[138 nova_placement ipv6]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: [validate]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: [validate]", > "Debug: /Firewall[105 ntp ipv4]: [validate]", > "Debug: /Firewall[105 ntp ipv6]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: [validate]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv4]: [validate]", > "Debug: /Firewall[131 pacemaker udp ipv6]: [validate]", > "Debug: /Firewall[140 panko-api ipv4]: [validate]", > "Debug: /Firewall[140 panko-api ipv6]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: [validate]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv4]: [validate]", > "Debug: /Firewall[108 redis-bundle ipv6]: [validate]", > "Debug: /Firewall[132 sahara ipv4]: [validate]", > "Debug: /Firewall[132 sahara ipv6]: [validate]", > "Debug: 
/Firewall[124 snmp ipv4]: [validate]", > "Debug: /Firewall[122 swift proxy ipv4]: [validate]", > "Debug: /Firewall[122 swift proxy ipv6]: [validate]", > "Debug: /Firewall[123 swift storage ipv4]: [validate]", > "Debug: /Firewall[123 swift storage ipv6]: [validate]", > "Info: Applying configuration version '1529673709'", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-192.168.24.14-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-192.168.24.14-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-10.0.0.110-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-10.0.0.110-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.1.11-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.1.11-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.1.17-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.1.17-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.3.15-haproxy-bundle]", > "Debug: 
/Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.3.15-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[order-ip-172.17.4.15-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_constraint[colo-ip-172.17.4.15-haproxy-bundle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-192.168.24.14]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-10.0.0.110]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.1.11]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.1.17]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.3.15]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_resource[ip-172.17.4.15]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-haproxy-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Haproxy::Stats/Haproxy::Listen[haproxy.stats]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy::Horizon_endpoint/Haproxy::Listen[horizon]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy::Horizon_endpoint/Haproxy::Balancermember[horizon_172.17.1.16_controller-0.internalapi.localdomain]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Listen[mysql]/notify: subscribes to Exec[haproxy-reload]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Haproxy::Balancermember[mysql-backup]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/require: subscribes to Package[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/require: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/subscribe: subscribes to Package[iptables-services]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[iptables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]/before: subscribes to Service[ip6tables]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/before: subscribes to Class[Tripleo::Firewall::Post]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Listen[redis]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Haproxy::Balancermember[redis]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]/subscribe: subscribes to Class[Haproxy]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/subscribe: subscribes to Concat[/etc/haproxy/haproxy.cfg]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[control_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[public_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[redis_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[storage_vip-then-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/before: subscribes to Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/notify: subscribes to Haproxy::Service[haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/before: subscribes to Haproxy::Config[haproxy]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Service[haproxy]/before: subscribes to Anchor[haproxy::haproxy::end]", > "Debug: 
/Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]/before: subscribes to Haproxy::Install[haproxy]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Haproxy::Listen[keystone_admin]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Haproxy::Balancermember[keystone_admin]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Haproxy::Listen[keystone_public]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Haproxy::Balancermember[keystone_public]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Haproxy::Listen[neutron]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Haproxy::Balancermember[neutron]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Haproxy::Listen[cinder]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Haproxy::Balancermember[cinder]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Haproxy::Listen[sahara]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Haproxy::Balancermember[sahara]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Haproxy::Listen[glance_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Haproxy::Balancermember[glance_api]/notify: subscribes to Exec[haproxy-reload]", > 
"Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Haproxy::Listen[nova_osapi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Haproxy::Balancermember[nova_osapi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Haproxy::Listen[nova_placement]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Haproxy::Balancermember[nova_placement]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Haproxy::Listen[nova_metadata]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Haproxy::Balancermember[nova_metadata]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Haproxy::Listen[nova_novncproxy]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Haproxy::Balancermember[nova_novncproxy]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Haproxy::Listen[aodh]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Haproxy::Balancermember[aodh]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Haproxy::Listen[panko]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Haproxy::Balancermember[panko]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Haproxy::Listen[gnocchi]/notify: subscribes to 
Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Haproxy::Balancermember[gnocchi]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Haproxy::Listen[swift_proxy_server]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Haproxy::Balancermember[swift_proxy_server]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Haproxy::Listen[heat_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Haproxy::Balancermember[heat_api]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Haproxy::Listen[heat_cfn]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Haproxy::Balancermember[heat_cfn]/notify: subscribes to Exec[haproxy-reload]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/before: subscribes 
to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log 
all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo/pacemaker-restarts]/before: subscribes to Exec[haproxy-clone resource restart flag]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[control_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[public_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/before: subscribes to Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/before: subscribes to 
Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/before: subscribes to Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: 
subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 
keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 
neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 
cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 
glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: 
subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 
nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 
nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl 
ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 
aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 
gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 
swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 
heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 
heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 
ceph_mgr]/Firewall[113 ceph_mgr ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 
glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy 
stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone 
ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 
memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > 
"Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: 
subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 
nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy 
ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 
pacemaker udp]/Firewall[131 pacemaker udp ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to 
Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/before: subscribes to 
Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift 
proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_v6_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to 
Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]", > "Debug: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/before: subscribes to Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/Concat_file[/etc/haproxy/haproxy.cfg]/before: subscribes to File[/etc/haproxy/haproxy.cfg]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autorequire 
relationship with Service[ip6tables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[000 accept related established rules ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[001 accept all icmp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface 
ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[002 accept all to lo interface ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autorequire relationship with 
Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[003 accept ssh ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[004 accept ipv6 dhcpv6 ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv4]: Adding 
autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[998 log all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[999 
drop all ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[999 drop all ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 mysql_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 
mysql_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 redis_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo/pacemaker-restarts]: Adding autorequire relationship with File[/var/lib/tripleo]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire 
relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autorequire relationship with 
Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding 
autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 
neutron_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > 
"Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl 
ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding 
autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: Adding 
autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 
glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: Adding autobefore relationship 
with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 
nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: 
Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 
nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autobefore 
relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 
nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy 
ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding 
autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > 
"Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire 
relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 
swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > 
"Debug: /Firewall[100 heat_api_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with 
Service[firewalld]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship 
with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[128 aodh-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 
ceph_mgr ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 ceph_mgr ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[110 ceph_mon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with 
Package[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[119 cinder ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi 
initiator ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[120 iscsi initiator ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: 
/Firewall[112 glance_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[112 glance_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[129 gnocchi-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Package[iptables-services]", > 
"Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 gnocchi-statsd ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv4]: Adding autobefore 
relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[107 haproxy stats ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autorequire relationship with 
Service[ip6tables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[125 heat_cfn ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon 
ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[127 horizon ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Package[iptables-services]", > 
"Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[111 keystone ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[121 memcached ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv4]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[104 mysql galera-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[114 neutron 
api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[114 neutron api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[115 neutron dhcp input ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron 
dhcp output ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[116 neutron dhcp output ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 
neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[106 neutron_l3 vrrp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with 
Package[iptables-services]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[118 neutron vxlan networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[136 neutron gre 
networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[136 neutron gre networks ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[113 nova_api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv4]: 
Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[138 nova_placement ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with 
Package[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[137 nova_vnc_proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[105 ntp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[105 ntp ipv6]: Adding autobefore relationship with 
File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[130 pacemaker tcp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding 
autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[131 pacemaker udp ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 panko-api ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[140 panko-api ipv6]: Adding 
autorequire relationship with Service[firewalld]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[140 panko-api ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[109 rabbitmq-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > 
"Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[108 redis-bundle ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[108 redis-bundle ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[132 sahara 
ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[132 sahara ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[132 sahara ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[124 snmp ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[122 
swift proxy ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[122 swift proxy ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[122 swift proxy ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Package[iptables-services]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[123 swift storage ipv4]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Package[iptables-services]", > 
"Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[firewalld]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autorequire relationship with Service[ip6tables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/iptables]", > "Debug: /Firewall[123 swift storage ipv6]: Adding autobefore relationship with File[/etc/sysconfig/ip6tables]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/Concat_file[/etc/haproxy/haproxy.cfg]: Skipping automatic relationship with File[/etc/haproxy/haproxy.cfg]", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Adding autorequire relationship with File[/etc/haproxy]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Haproxy_bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Pacemaker::Haproxy_bundle]: Resource is being skipped, 
unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Profile::Base::Haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Class[Haproxy::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Instance[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Instance[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_evaluator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_evaluator]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_listener]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_listener]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_notifier]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[aodh_notifier]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ca_certs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ca_certs]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Haproxy::Service_endpoints[ceilometer_agent_central]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_central]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_notification]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_notification]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mgr]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_mon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[certmonger_user]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[certmonger_user]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_backup]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_volume]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[cinder_volume]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[clustercheck]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[clustercheck]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[docker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[docker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_registry_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[glance_registry_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_metricd]: Not tagged 
with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_metricd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[gnocchi_statsd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cloudwatch_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cloudwatch_disabled]: Resource is being 
skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_api_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[heat_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[horizon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[iscsid]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[iscsid]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[kernel]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Service_endpoints[kernel]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[mongodb_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[mongodb_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_plugin_ml2]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_plugin_ml2]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_dhcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_dhcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_l3]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_l3]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_ovs_agent]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[neutron_ovs_agent]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_conductor]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_conductor]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_consoleauth]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_consoleauth]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Haproxy::Service_endpoints[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Service_endpoints[ntp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[logrotate_crond]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[logrotate_crond]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[panko_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[panko_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_rpc]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_rpc]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_notify]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[oslo_messaging_notify]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[sahara_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[sshd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[sshd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_ringbuilder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_ringbuilder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[swift_storage]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[timezone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[timezone]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Haproxy::Service_endpoints[tripleo_firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_firewall]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_packages]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[tripleo_packages]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[tuned]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[tuned]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_client]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceilometer_agent_compute]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Haproxy::Service_endpoints[ceilometer_agent_compute]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_compute]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_compute]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_libvirt]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_migration_target]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[nova_migration_target]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_osd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Service_endpoints[ceph_osd]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Haproxy::Stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Haproxy::Stats]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[haproxy.stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[haproxy.stats]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, 
pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[panko]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", 
> "Debug: Tripleo::Haproxy::Endpoint[swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Haproxy::Endpoint[heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Haproxy::Horizon_endpoint]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Haproxy::Horizon_endpoint]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[horizon]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[horizon_172.17.1.16_controller-0.internalapi.localdomain]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > 
"Debug: Haproxy::Balancermember[horizon_172.17.1.16_controller-0.internalapi.localdomain]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[mysql-backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[mysql-backup]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Firewall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Firewall::Pre]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Firewall::Pre]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall::Params]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall::Params]: Resource is being skipped, unscheduling all 
events", > "Debug: Class[Firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall::Linux]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall::Linux]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux/Package[iptables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux/Package[iptables]: Resource is being skipped, unscheduling all events", > "Debug: Class[Firewall::Linux::Redhat]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Firewall::Linux::Redhat]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[firewalld]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Package[iptables-services]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Exec[/usr/bin/systemctl daemon-reload]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[000 accept related established rules]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[001 accept all icmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[002 accept all to lo interface]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[003 accept ssh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[003 accept ssh]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_evaluator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_evaluator]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_listener]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_listener]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[aodh_notifier]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[aodh_notifier]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ca_certs]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ca_certs]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_central]: Not tagged with 
file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_central]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_notification]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceilometer_agent_notification]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceph_mgr]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ceph_mon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[certmonger_user]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[certmonger_user]: Resource is being skipped, unscheduling 
all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_backup]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[cinder_volume]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[clustercheck]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Firewall::Service_rules[clustercheck]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[docker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[docker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[glance_registry_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[glance_registry_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_metricd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_metricd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[gnocchi_statsd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cloudwatch_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cloudwatch_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_api_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[heat_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[heat_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[horizon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[iscsid]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[iscsid]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[kernel]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[kernel]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mongodb_disabled]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mongodb_disabled]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mysql]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mysql]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[mysql_client]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[mysql_client]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: Not tagged with file, file_line, concat, 
augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_plugin_ml2]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_plugin_ml2]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_dhcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_l3]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[neutron_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_metadata]: Resource is being skipped, unscheduling all events", > "Debug: 
Tripleo::Firewall::Service_rules[neutron_ovs_agent]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[neutron_ovs_agent]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_conductor]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_conductor]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_consoleauth]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_consoleauth]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Tripleo::Firewall::Service_rules[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_scheduler]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_scheduler]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[ntp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[logrotate_crond]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[logrotate_crond]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[panko_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_rpc]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_rpc]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_notify]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[oslo_messaging_notify]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sahara_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sahara_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sahara_engine]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sahara_engine]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[sshd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[sshd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_ringbuilder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_ringbuilder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[swift_storage]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[timezone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[timezone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tripleo_firewall]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tripleo_firewall]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tripleo_packages]: Not 
tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tripleo_packages]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Service_rules[tuned]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Service_rules[tuned]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 mysql_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 mysql_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[redis]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[redis]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[redis]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 redis_haproxy]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 redis_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[haproxy-role-controller-0]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Property[haproxy-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]: Resource is being skipped, 
unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-wk6irw returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-wk6irw property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]: Not tagged 
with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::begin]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Install[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Install[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Class[Haproxy::Globals]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Haproxy::Globals]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-haproxy.stats_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-haproxy.stats_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[keystone_admin]: Not tagged with file, file_line, concat, 
augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Not 
tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[neutron]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[cinder]: Not tagged with file, file_line, 
concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[cinder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_osapi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_metadata]: Not tagged with file, 
file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Not tagged with file, 
file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[aodh]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Not tagged with file, file_line, concat, 
augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[panko]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[panko]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Listen[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Listen[heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Balancermember[heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Balancermember[heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Not tagged with file, file_line, concat, augeas, 
tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-horizon_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-horizon_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-horizon_balancermember_horizon_172.17.1.16_controller-0.internalapi.localdomain]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-horizon_balancermember_horizon_172.17.1.16_controller-0.internalapi.localdomain]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-mysql_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-mysql_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-mysql_balancermember_mysql-backup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Concat::Fragment[haproxy-mysql_balancermember_mysql-backup]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching iptables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIptables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIptables: [instances]", > "Debug: Executing: '/usr/sbin/iptables-save'", > "Debug: Prefetching ip6tables resources for firewall", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [prefetch(resources)]", > "Debug: Puppet::Type::Firewall::ProviderIp6tables: [instances]", > "Debug: Executing: '/usr/sbin/ip6tables-save'", > "Debug: Class[Tripleo::Firewall::Post]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Class[Tripleo::Firewall::Post]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[998 log all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[998 log all]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[999 drop all]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[999 drop all]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[128 aodh-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[128 aodh-api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[113 ceph_mgr]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[110 ceph_mon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[119 cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[119 cinder]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[120 iscsi initiator]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[112 glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[112 glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[129 gnocchi-api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[140 gnocchi-statsd]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[107 haproxy stats]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[125 heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[125 heat_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[125 heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[127 horizon]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[127 horizon]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[111 keystone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[111 keystone]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[121 memcached]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[121 memcached]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[104 mysql galera-bundle]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[114 neutron api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, 
pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[114 neutron api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[115 neutron dhcp input]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[116 neutron dhcp output]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[106 neutron_l3 vrrp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[118 neutron vxlan networks]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[136 neutron gre networks]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[113 nova_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[113 nova_api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[138 nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[138 nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[137 nova_vnc_proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[105 ntp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[105 ntp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[130 pacemaker tcp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[131 pacemaker udp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[140 panko-api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[140 panko-api]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[109 rabbitmq-bundle]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[108 redis-bundle]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[132 sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[132 sahara]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[124 snmp]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[124 snmp]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[122 swift proxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[122 swift proxy]: Resource is being skipped, unscheduling all events", > "Debug: Tripleo::Firewall::Rule[123 swift storage]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Firewall::Rule[123 swift storage]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): Inserting rule 100 mysql_haproxy ipv4", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 3306 -m state --state NEW -j ACCEPT -m comment --comment 100 mysql_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 
mysql_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 mysql_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/iptables.init save'", > "Debug: /Firewall[100 mysql_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 mysql_haproxy] will propagate my refresh event", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): Inserting rule 100 mysql_haproxy ipv6", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 3306 -m state --state NEW -j ACCEPT -m comment --comment 100 mysql_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 mysql_haproxy]/Firewall[100 mysql_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 mysql_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: Executing: '/usr/libexec/iptables/ip6tables.init save'", > "Debug: /Firewall[100 mysql_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 mysql_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 mysql_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 mysql_haproxy]", > "Debug: Concat::Fragment[haproxy-redis_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-redis_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-redis_balancermember_redis]: Not tagged with file, file_line, 
concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-redis_balancermember_redis]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): Inserting rule 100 redis_haproxy ipv4", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 6379 -m state --state NEW -j ACCEPT -m comment --comment 100 redis_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 redis_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 redis_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 redis_haproxy] will propagate my refresh event", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): Inserting rule 100 redis_haproxy ipv6", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 6379 -m state --state NEW -j ACCEPT -m comment --comment 100 redis_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Firewall::Rule[100 redis_haproxy]/Firewall[100 redis_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 redis_haproxy ipv6](provider=ip6tables): [persist_iptables]", > 
"Debug: /Firewall[100 redis_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 redis_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 redis_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 redis_haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-8x433q returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-8x433q property show | grep haproxy-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep haproxy-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-i1sbi5 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-i1sbi5 property set --node controller-0 haproxy-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-i1sbi5 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-i1sbi5.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 haproxy-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/Pcmk_property[property-controller-0-haproxy-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Property[haproxy-role-controller-0]/Pcmk_property[property-controller-0-haproxy-role]: The container Pacemaker::Property[haproxy-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[haproxy-role-controller-0]: Unscheduling all events on Pacemaker::Property[haproxy-role-controller-0]", > "Debug: Pacemaker::Resource::Ip[control_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[control_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[public_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[public_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[redis_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[redis_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[internal_api_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[internal_api_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[storage_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[storage_vip]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Resource::Ip[storage_mgmt_vip]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Ip[storage_mgmt_vip]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/Package[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Install[haproxy]/Package[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Haproxy::Config[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Config[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Concat[/etc/haproxy/haproxy.cfg]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat[/etc/haproxy/haproxy.cfg]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-00-header]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-00-header]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-haproxy-base]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-haproxy-base]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-keystone_admin_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_admin_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-keystone_admin_balancermember_keystone_admin]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_admin_balancermember_keystone_admin]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): Inserting rule 100 keystone_admin_haproxy ipv4", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 35357 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_admin_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 keystone_admin_haproxy ipv4](provider=iptables): 
[persist_iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 keystone_admin_haproxy] will propagate my refresh event", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): Inserting rule 100 keystone_admin_haproxy ipv6", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 35357 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_admin_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_admin]/Tripleo::Firewall::Rule[100 keystone_admin_haproxy]/Firewall[100 keystone_admin_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 keystone_admin_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 keystone_admin_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 keystone_admin_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 keystone_admin_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_admin_haproxy]", > "Debug: Concat::Fragment[haproxy-keystone_public_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_public_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-keystone_public_balancermember_keystone_public]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-keystone_public_balancermember_keystone_public]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): Inserting rule 100 keystone_public_haproxy ipv4", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 5000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 keystone_public_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy] will propagate my refresh event", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): Inserting rule 100 keystone_public_haproxy ipv6", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 5000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy]/Firewall[100 
keystone_public_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 keystone_public_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_public_haproxy]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 keystone_public_haproxy_ssl ipv4", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 keystone_public_haproxy_ssl ipv6", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: 
'/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 13000 -m state --state NEW -j ACCEPT -m comment --comment 100 keystone_public_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[keystone_public]/Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]/Firewall[100 keystone_public_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 keystone_public_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 keystone_public_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 keystone_public_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-neutron_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-neutron_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-neutron_balancermember_neutron]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-neutron_balancermember_neutron]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): Inserting rule 100 neutron_haproxy ipv4", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): Current resource: 
Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 9696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 neutron_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 neutron_haproxy] will propagate my refresh event", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): Inserting rule 100 neutron_haproxy ipv6", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 9696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy]/Firewall[100 neutron_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 neutron_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 neutron_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 neutron_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 neutron_haproxy]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 neutron_haproxy_ssl ipv4", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 
neutron_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 10 --wait -t filter -p tcp -m multiport --dports 13696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 neutron_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 neutron_haproxy_ssl ipv6", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 13696 -m state --state NEW -j ACCEPT -m comment --comment 100 neutron_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[neutron]/Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]/Firewall[100 neutron_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 neutron_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 neutron_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 neutron_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 neutron_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-cinder_listen_block]: Not tagged with 
file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-cinder_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-cinder_balancermember_cinder]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-cinder_balancermember_cinder]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): Inserting rule 100 cinder_haproxy ipv4", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 cinder_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 cinder_haproxy] will propagate my refresh event", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): Inserting rule 100 cinder_haproxy ipv6", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): Current resource: 
Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 8776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy]/Firewall[100 cinder_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 cinder_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 cinder_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 cinder_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 cinder_haproxy]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 cinder_haproxy_ssl ipv4", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 13776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 cinder_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 cinder_haproxy_ssl ipv6", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > 
"Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13776 -m state --state NEW -j ACCEPT -m comment --comment 100 cinder_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[cinder]/Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]/Firewall[100 cinder_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 cinder_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 cinder_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 cinder_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 cinder_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-sahara_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-sahara_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-sahara_balancermember_sahara]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-sahara_balancermember_sahara]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): Inserting rule 100 sahara_haproxy ipv4", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy 
ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 8386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 sahara_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 sahara_haproxy] will propagate my refresh event", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): Inserting rule 100 sahara_haproxy ipv6", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy]/Firewall[100 sahara_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 sahara_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 sahara_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 sahara_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 sahara_haproxy]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 sahara_haproxy_ssl ipv4", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [insert_order]", > 
"Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 13386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 sahara_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 sahara_haproxy_ssl ipv6", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13386 -m state --state NEW -j ACCEPT -m comment --comment 100 sahara_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[sahara]/Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]/Firewall[100 sahara_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 sahara_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 sahara_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 sahara_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 sahara_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-glance_api_listen_block]: Not tagged with 
file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-glance_api_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-glance_api_balancermember_glance_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-glance_api_balancermember_glance_api]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): Inserting rule 100 glance_api_haproxy ipv4", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 9292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 glance_api_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy] will propagate my refresh event", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): Inserting rule 100 glance_api_haproxy ipv6", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 
glance_api_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 9292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy]/Firewall[100 glance_api_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 glance_api_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 glance_api_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 glance_api_haproxy]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 glance_api_haproxy_ssl ipv4", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 8 --wait -t filter -p tcp -m multiport --dports 13292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 glance_api_haproxy_ssl 
ipv6](provider=ip6tables): Inserting rule 100 glance_api_haproxy_ssl ipv6", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 9 --wait -t filter -p tcp -m multiport --dports 13292 -m state --state NEW -j ACCEPT -m comment --comment 100 glance_api_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[glance_api]/Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]/Firewall[100 glance_api_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 glance_api_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 glance_api_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 glance_api_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-nova_osapi_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_osapi_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_osapi_balancermember_nova_osapi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_osapi_balancermember_nova_osapi]: Resource is being skipped, unscheduling all events", > 
"Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): Inserting rule 100 nova_osapi_haproxy ipv4", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_osapi_haproxy ipv6", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy]/Firewall[100 nova_osapi_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy] will propagate my refresh event", > "Info: 
Tripleo::Firewall::Rule[100 nova_osapi_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_osapi_haproxy]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_osapi_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_osapi_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13774 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_osapi_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_osapi]/Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]/Firewall[100 nova_osapi_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 
nova_osapi_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_osapi_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_osapi_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-nova_placement_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_placement_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_placement_balancermember_nova_placement]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_placement_balancermember_nova_placement]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): Inserting rule 100 nova_placement_haproxy ipv4", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 8778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 
nova_placement_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_placement_haproxy ipv6", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 8778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy]/Firewall[100 nova_placement_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_placement_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_placement_haproxy]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_placement_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 13778 -m state --state NEW -j ACCEPT -m comment --comment 100 
nova_placement_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_placement_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 19 --wait -t filter -p tcp -m multiport --dports 13778 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_placement_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_placement]/Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]/Firewall[100 nova_placement_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_placement_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_placement_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_placement_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-nova_metadata_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_metadata_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_metadata_balancermember_nova_metadata]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_metadata_balancermember_nova_metadata]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): Inserting rule 100 nova_metadata_haproxy ipv4", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8775 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_metadata_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_metadata_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_metadata_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_metadata_haproxy ipv6", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_metadata_haproxy 
ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8775 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_metadata_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_metadata]/Tripleo::Firewall::Rule[100 nova_metadata_haproxy]/Firewall[100 nova_metadata_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_metadata_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_metadata_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_metadata_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_metadata_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_metadata_haproxy]", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_balancermember_nova_novncproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-nova_novncproxy_balancermember_nova_novncproxy]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): Inserting rule 100 nova_novncproxy_haproxy ipv4", > "Debug: Firewall[100 nova_novncproxy_haproxy 
ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 6080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy] will propagate my refresh event", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): Inserting rule 100 nova_novncproxy_haproxy ipv6", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 6080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]/Firewall[100 nova_novncproxy_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 
nova_novncproxy_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 nova_novncproxy_haproxy_ssl ipv4", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 nova_novncproxy_haproxy_ssl ipv6", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 18 --wait -t filter -p tcp -m multiport --dports 13080 -m state --state NEW -j ACCEPT -m comment --comment 100 nova_novncproxy_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[nova_novncproxy]/Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]/Firewall[100 nova_novncproxy_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 
nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 nova_novncproxy_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 nova_novncproxy_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 nova_novncproxy_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-aodh_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-aodh_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-aodh_balancermember_aodh]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-aodh_balancermember_aodh]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): Inserting rule 100 aodh_haproxy ipv4", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 5 --wait -t filter -p tcp -m multiport --dports 8042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): 
[flush]", > "Debug: Firewall[100 aodh_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 aodh_haproxy] will propagate my refresh event", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): Inserting rule 100 aodh_haproxy ipv6", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 8042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy]/Firewall[100 aodh_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 aodh_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 aodh_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 aodh_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 aodh_haproxy]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 aodh_haproxy_ssl ipv4", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 6 --wait -t filter -p tcp -m multiport --dports 13042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 aodh_haproxy_ssl 
ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 aodh_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 aodh_haproxy_ssl ipv6", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 7 --wait -t filter -p tcp -m multiport --dports 13042 -m state --state NEW -j ACCEPT -m comment --comment 100 aodh_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[aodh]/Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]/Firewall[100 aodh_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 aodh_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 aodh_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 aodh_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 aodh_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-panko_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-panko_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-panko_balancermember_panko]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, 
pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-panko_balancermember_panko]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): Inserting rule 100 panko_haproxy ipv4", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 24 --wait -t filter -p tcp -m multiport --dports 8977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 panko_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 panko_haproxy] will propagate my refresh event", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): Inserting rule 100 panko_haproxy ipv6", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 8977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy]/Firewall[100 panko_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 panko_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy ipv6]: The 
container Tripleo::Firewall::Rule[100 panko_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 panko_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 panko_haproxy]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 panko_haproxy_ssl ipv4", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 25 --wait -t filter -p tcp -m multiport --dports 13977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 panko_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 panko_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 panko_haproxy_ssl ipv6", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 26 --wait -t filter -p tcp -m multiport --dports 13977 -m state --state NEW -j ACCEPT -m comment --comment 100 panko_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[panko]/Tripleo::Firewall::Rule[100 panko_haproxy_ssl]/Firewall[100 panko_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 panko_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 panko_haproxy_ssl 
ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 panko_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 panko_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 panko_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 panko_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-gnocchi_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-gnocchi_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-gnocchi_balancermember_gnocchi]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-gnocchi_balancermember_gnocchi]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): Inserting rule 100 gnocchi_haproxy ipv4", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 11 --wait -t filter -p tcp -m multiport --dports 8041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: 
/Firewall[100 gnocchi_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy] will propagate my refresh event", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): Inserting rule 100 gnocchi_haproxy ipv6", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 8041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy]/Firewall[100 gnocchi_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 gnocchi_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 gnocchi_haproxy]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 gnocchi_haproxy_ssl ipv4", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 12 --wait -t filter -p tcp -m multiport --dports 13041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: 
Firewall[100 gnocchi_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 gnocchi_haproxy_ssl ipv6", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 13041 -m state --state NEW -j ACCEPT -m comment --comment 100 gnocchi_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[gnocchi]/Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]/Firewall[100 gnocchi_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 gnocchi_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 gnocchi_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 gnocchi_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_balancermember_swift_proxy_server]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-swift_proxy_server_balancermember_swift_proxy_server]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): Inserting rule 100 swift_proxy_server_haproxy ipv4", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 31 --wait -t filter -p tcp -m multiport --dports 8080 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy] will propagate my refresh event", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): Inserting rule 100 swift_proxy_server_haproxy ipv6", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 32 --wait -t filter -p tcp -m multiport --dports 8080 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy ipv6'", > "Notice: 
/Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]/Firewall[100 swift_proxy_server_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 swift_proxy_server_haproxy_ssl ipv4", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 32 --wait -t filter -p tcp -m multiport --dports 13808 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 swift_proxy_server_haproxy_ssl ipv6", > "Debug: Firewall[100 
swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 33 --wait -t filter -p tcp -m multiport --dports 13808 -m state --state NEW -j ACCEPT -m comment --comment 100 swift_proxy_server_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[swift_proxy_server]/Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]/Firewall[100 swift_proxy_server_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 swift_proxy_server_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 swift_proxy_server_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 swift_proxy_server_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-heat_api_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_api_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-heat_api_balancermember_heat_api]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_api_balancermember_heat_api]: Resource is being skipped, unscheduling all events", > "Debug: 
Firewall[100 heat_api_haproxy ipv4](provider=iptables): Inserting rule 100 heat_api_haproxy ipv4", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 13 --wait -t filter -p tcp -m multiport --dports 8004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_api_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv4]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy] will propagate my refresh event", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): Inserting rule 100 heat_api_haproxy ipv6", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 8004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy]/Firewall[100 heat_api_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_api_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_api_haproxy]: 
Unscheduling all events on Tripleo::Firewall::Rule[100 heat_api_haproxy]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 heat_api_haproxy_ssl ipv4", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 14 --wait -t filter -p tcp -m multiport --dports 13004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_api_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 heat_api_haproxy_ssl ipv6", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 13004 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_api_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_api]/Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]/Firewall[100 heat_api_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_api_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 
heat_api_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_api_haproxy_ssl]", > "Debug: Concat::Fragment[haproxy-heat_cfn_listen_block]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_cfn_listen_block]: Resource is being skipped, unscheduling all events", > "Debug: Concat::Fragment[haproxy-heat_cfn_balancermember_heat_cfn]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Concat::Fragment[haproxy-heat_cfn_balancermember_heat_cfn]: Resource is being skipped, unscheduling all events", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): Inserting rule 100 heat_cfn_haproxy ipv4", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 15 --wait -t filter -p tcp -m multiport --dports 8000 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv4]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv4]: The container 
Tripleo::Firewall::Rule[100 heat_cfn_haproxy] will propagate my refresh event", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): Inserting rule 100 heat_cfn_haproxy ipv6", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 8000 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy]/Firewall[100 heat_cfn_haproxy ipv6]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy ipv6]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_cfn_haproxy]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_cfn_haproxy]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): Inserting rule 100 heat_cfn_haproxy_ssl ipv4", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/iptables -I INPUT 16 --wait -t filter -p tcp -m multiport --dports 13005 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy_ssl ipv4'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv4]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv4](provider=iptables): [flush]", > "Debug: Firewall[100 
heat_cfn_haproxy_ssl ipv4](provider=iptables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv4]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl] will propagate my refresh event", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): Inserting rule 100 heat_cfn_haproxy_ssl ipv6", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [insert_order]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): Current resource: Puppet::Type::Firewall", > "Debug: Executing: '/usr/sbin/ip6tables -I INPUT 17 --wait -t filter -p tcp -m multiport --dports 13005 -m state --state NEW -j ACCEPT -m comment --comment 100 heat_cfn_haproxy_ssl ipv6'", > "Notice: /Stage[main]/Tripleo::Haproxy/Tripleo::Haproxy::Endpoint[heat_cfn]/Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]/Firewall[100 heat_cfn_haproxy_ssl ipv6]/ensure: created", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [flush]", > "Debug: Firewall[100 heat_cfn_haproxy_ssl ipv6](provider=ip6tables): [persist_iptables]", > "Debug: /Firewall[100 heat_cfn_haproxy_ssl ipv6]: The container Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl] will propagate my refresh event", > "Info: Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]: Unscheduling all events on Tripleo::Firewall::Rule[100 heat_cfn_haproxy_ssl]", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v4_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v4_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v6_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v6_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v4_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_ironic_inspector_pxe_filter_v6_rules_cleanup]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-9bhz4h returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-9bhz4h constraint list | grep location-ip-192.168.24.14 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-xkzwjn returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-xkzwjn resource show ip-192.168.24.14 > /dev/null 2>&1", > "Debug: Exists: resource ip-192.168.24.14 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-120fkcr returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-120fkcr resource create ip-192.168.24.14 IPaddr2 ip=192.168.24.14 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-120fkcr diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-120fkcr.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-192.168.24.14 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-192.168.24.14 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-mpdtvp returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-mpdtvp constraint location ip-192.168.24.14 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-mpdtvp diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-mpdtvp.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1m3g3va returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1m3g3va resource enable ip-192.168.24.14", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1m3g3va diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1m3g3va.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/Pcmk_resource[ip-192.168.24.14]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Resource::Ip[control_vip]/Pcmk_resource[ip-192.168.24.14]: The container Pacemaker::Resource::Ip[control_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[control_vip]: Unscheduling all events on Pacemaker::Resource::Ip[control_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1k09kq4 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1k09kq4 constraint list | grep location-ip-10.0.0.110 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1tyxbu2 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1tyxbu2 resource show ip-10.0.0.110 > /dev/null 2>&1", > "Debug: Exists: resource ip-10.0.0.110 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1k11bnq returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1k11bnq resource create ip-10.0.0.110 IPaddr2 ip=10.0.0.110 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1k11bnq diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1k11bnq.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-10.0.0.110 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-10.0.0.110 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ylg4ex returned ", > "Debug: try 1/10: /usr/sbin/pcs -f 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ylg4ex constraint location ip-10.0.0.110 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ylg4ex diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ylg4ex.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1i105av returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1i105av resource enable ip-10.0.0.110", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1i105av diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1i105av.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/Pcmk_resource[ip-10.0.0.110]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Resource::Ip[public_vip]/Pcmk_resource[ip-10.0.0.110]: The container Pacemaker::Resource::Ip[public_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[public_vip]: Unscheduling all events on Pacemaker::Resource::Ip[public_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-bal9ac returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-bal9ac constraint list | grep location-ip-172.17.1.11 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-2hzuo2 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-2hzuo2 resource show ip-172.17.1.11 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.1.11 
exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-n7vhws returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-n7vhws resource create ip-172.17.1.11 IPaddr2 ip=172.17.1.11 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-n7vhws diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-n7vhws.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.1.11 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.1.11 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-wq4oja returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-wq4oja constraint location ip-172.17.1.11 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-wq4oja diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-wq4oja.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12mltz8 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12mltz8 resource enable ip-172.17.1.11", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12mltz8 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12mltz8.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/Pcmk_resource[ip-172.17.1.11]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Resource::Ip[redis_vip]/Pcmk_resource[ip-172.17.1.11]: The container Pacemaker::Resource::Ip[redis_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[redis_vip]: Unscheduling all events on Pacemaker::Resource::Ip[redis_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1knnbyy returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1knnbyy constraint list | grep location-ip-172.17.1.17 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1ek8z82 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1ek8z82 resource show ip-172.17.1.17 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.1.17 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-l4g0t4 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-l4g0t4 resource create ip-172.17.1.17 IPaddr2 ip=172.17.1.17 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-l4g0t4 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-l4g0t4.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.1.17 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.1.17 rule resource-discovery=exclusive score=0 
haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1sccvvj returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1sccvvj constraint location ip-172.17.1.17 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1sccvvj diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1sccvvj.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-qi5j59 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-qi5j59 resource enable ip-172.17.1.17", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-qi5j59 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-qi5j59.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/Pcmk_resource[ip-172.17.1.17]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Resource::Ip[internal_api_vip]/Pcmk_resource[ip-172.17.1.17]: The container Pacemaker::Resource::Ip[internal_api_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[internal_api_vip]: Unscheduling all events on Pacemaker::Resource::Ip[internal_api_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-14l27rk returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-14l27rk constraint list | grep location-ip-172.17.3.15 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1vjgqz7 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1vjgqz7 resource show ip-172.17.3.15 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.3.15 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1g5z077 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1g5z077 resource create ip-172.17.3.15 IPaddr2 ip=172.17.3.15 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1g5z077 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1g5z077.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location ip-172.17.3.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.3.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hrrb2h returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hrrb2h constraint location ip-172.17.3.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hrrb2h diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hrrb2h.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-xce82n returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-xce82n resource enable ip-172.17.3.15", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-xce82n diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-xce82n.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/Pcmk_resource[ip-172.17.3.15]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Resource::Ip[storage_vip]/Pcmk_resource[ip-172.17.3.15]: The container Pacemaker::Resource::Ip[storage_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[storage_vip]: Unscheduling all events on Pacemaker::Resource::Ip[storage_vip]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-bmzg5k returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-bmzg5k constraint list | grep location-ip-172.17.4.15 > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-votcit returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-votcit resource show ip-172.17.4.15 > /dev/null 2>&1", > "Debug: Exists: resource ip-172.17.4.15 exists 1 location exists 1 resource deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ztquhy returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ztquhy resource create ip-172.17.4.15 IPaddr2 ip=172.17.4.15 cidr_netmask=32 meta resource-stickiness=INFINITY --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ztquhy diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-ztquhy.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location 
ip-172.17.4.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location ip-172.17.4.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1hrhq55 returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1hrhq55 constraint location ip-172.17.4.15 rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1hrhq55 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1hrhq55.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1o2ct2q returned ", > "Debug: try 1/10: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1o2ct2q resource enable ip-172.17.4.15", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1o2ct2q diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1o2ct2q.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.17.4.15]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.17.4.15]: The container Pacemaker::Resource::Ip[storage_mgmt_vip] will propagate my refresh event", > "Info: Pacemaker::Resource::Ip[storage_mgmt_vip]: Unscheduling all events on Pacemaker::Resource::Ip[storage_mgmt_vip]", > "Debug: Pacemaker::Resource::Bundle[haproxy-bundle]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Resource::Bundle[haproxy-bundle]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-gi355q returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-gi355q constraint list | grep location-haproxy-bundle > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1tp1ugz returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1tp1ugz resource show haproxy-bundle > /dev/null 2>&1", > "Debug: Exists: bundle haproxy-bundle exists 1 location exists 1 deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-x1uqio returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-x1uqio resource bundle create haproxy-bundle container docker image=192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest replicas=1 options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=haproxy-cfg-files source-dir=/var/lib/kolla/config_files/haproxy.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=haproxy-cfg-data source-dir=/var/lib/config-data/puppet-generated/haproxy/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=haproxy-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=haproxy-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=haproxy-var-lib source-dir=/var/lib/haproxy target-dir=/var/lib/haproxy options=rw storage-map id=haproxy-pki-extracted 
source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=haproxy-pki-ca-bundle-crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=haproxy-pki-ca-bundle-trust-crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=haproxy-pki-cert source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=haproxy-dev-log source-dir=/dev/log target-dir=/dev/log options=rw --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-x1uqio diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-x1uqio.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: location_rule_create: constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-5txm3s returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-5txm3s constraint location haproxy-bundle rule resource-discovery=exclusive score=0 haproxy-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-5txm3s diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-5txm3s.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pm41ev returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pm41ev resource enable haproxy-bundle", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pm41ev 
diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pm41ev.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/Pcmk_bundle[haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Pacemaker::Resource::Bundle[haproxy-bundle]/Pcmk_bundle[haproxy-bundle]: The container Pacemaker::Resource::Bundle[haproxy-bundle] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[haproxy-bundle]: Unscheduling all events on Pacemaker::Resource::Bundle[haproxy-bundle]", > "Debug: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Not tagged with file, file_line, 
concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pokgms returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pokgms constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-79f01e returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-79f01e constraint order start ip-192.168.24.14 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-79f01e diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-79f01e.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/Pcmk_constraint[order-ip-192.168.24.14-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Order[control_vip-then-haproxy]/Pcmk_constraint[order-ip-192.168.24.14-haproxy-bundle]: The container Pacemaker::Constraint::Order[control_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[control_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[control_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-17h4xjv returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-17h4xjv constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1fsjcml returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1fsjcml constraint colocation add ip-192.168.24.14 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1fsjcml diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1fsjcml.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Colocation[control_vip-with-haproxy]/Pcmk_constraint[colo-ip-192.168.24.14-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_control_vip]/Pacemaker::Constraint::Colocation[control_vip-with-haproxy]/Pcmk_constraint[colo-ip-192.168.24.14-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[control_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[control_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[control_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s7j5mb returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s7j5mb constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1y5pdiy returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1y5pdiy constraint order start ip-10.0.0.110 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1y5pdiy diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1y5pdiy.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/Pcmk_constraint[order-ip-10.0.0.110-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Order[public_vip-then-haproxy]/Pcmk_constraint[order-ip-10.0.0.110-haproxy-bundle]: The 
container Pacemaker::Constraint::Order[public_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[public_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[public_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-6s6l08 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-6s6l08 constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s5gguc returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s5gguc constraint colocation add ip-10.0.0.110 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s5gguc diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1s5gguc.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Colocation[public_vip-with-haproxy]/Pcmk_constraint[colo-ip-10.0.0.110-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_public_vip]/Pacemaker::Constraint::Colocation[public_vip-with-haproxy]/Pcmk_constraint[colo-ip-10.0.0.110-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[public_vip-with-haproxy] will 
propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[public_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[public_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-w9joh5 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-w9joh5 constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-oj3ilx returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-oj3ilx constraint order start ip-172.17.1.11 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-oj3ilx diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-oj3ilx.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.11-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Order[redis_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.11-haproxy-bundle]: The container Pacemaker::Constraint::Order[redis_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[redis_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[redis_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: 
Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-16homx returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-16homx constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-7ed6ym returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-7ed6ym constraint colocation add ip-172.17.1.11 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-7ed6ym diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-7ed6ym.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.11-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_redis_vip]/Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.11-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[redis_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[redis_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-2fed8s returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-2fed8s constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12oz5ni returned ", > "Debug: try 1/20: /usr/sbin/pcs -f 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12oz5ni constraint order start ip-172.17.1.17 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12oz5ni diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12oz5ni.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.17-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.1.17-haproxy-bundle]: The container Pacemaker::Constraint::Order[internal_api_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[internal_api_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1gr2ro7 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1gr2ro7 constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pizr4e returned ", > "Debug: try 1/20: /usr/sbin/pcs -f 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pizr4e constraint colocation add ip-172.17.1.17 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pizr4e diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1pizr4e.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.17-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_internal_api_vip]/Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.1.17-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[internal_api_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1w2xseg returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1w2xseg constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-14tzoaf returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-14tzoaf constraint order start ip-172.17.3.15 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-14tzoaf diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-14tzoaf.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.3.15-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Order[storage_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.3.15-haproxy-bundle]: The container Pacemaker::Constraint::Order[storage_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[storage_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[storage_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1vmmkxi returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1vmmkxi constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1fd9p6a returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1fd9p6a constraint colocation add ip-172.17.3.15 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1fd9p6a diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1fd9p6a.orig returned 0 -> CIB updated", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.3.15-haproxy-bundle]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_vip]/Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.3.15-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[storage_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[storage_vip-with-haproxy]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1wepp76 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1wepp76 constraint order show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1ev8st8 returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1ev8st8 constraint order start ip-172.17.4.15 then start haproxy-bundle kind=Optional", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1ev8st8 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1ev8st8.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.4.15-haproxy-bundle]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]/Pcmk_constraint[order-ip-172.17.4.15-haproxy-bundle]: The container Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]: Unscheduling all events on Pacemaker::Constraint::Order[storage_mgmt_vip-then-haproxy]", > "Debug: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-msfxza returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-msfxza constraint colocation show --full", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hsbv2t returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hsbv2t constraint colocation add ip-172.17.4.15 with haproxy-bundle INFINITY", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hsbv2t diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hsbv2t.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.4.15-haproxy-bundle]/ensure: created", > "Debug: 
/Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]/Pcmk_constraint[colo-ip-172.17.4.15-haproxy-bundle]: The container Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy] will propagate my refresh event", > "Info: Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]: Unscheduling all events on Pacemaker::Constraint::Colocation[storage_mgmt_vip-with-haproxy]", > "Info: Computing checksum on file /etc/haproxy/haproxy.cfg", > "Info: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Filebucketed /etc/haproxy/haproxy.cfg to puppet with sum 1f337186b0e1ba5ee82760cb437fb810", > "Debug: Executing: '/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg20180622-8-15f7tl0 -c'", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Configuration file is valid", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}3e602920be68dd9114246aadb54dcae7'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: The container Concat[/etc/haproxy/haproxy.cfg] will propagate my refresh event", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: The container /etc/haproxy/haproxy.cfg will propagate my refresh event", > "Debug: 
/etc/haproxy/haproxy.cfg: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /etc/haproxy/haproxy.cfg: Resource is being skipped, unscheduling all events", > "Info: /etc/haproxy/haproxy.cfg: Unscheduling all events on /etc/haproxy/haproxy.cfg", > "Info: Concat[/etc/haproxy/haproxy.cfg]: Unscheduling all events on Concat[/etc/haproxy/haproxy.cfg]", > "Debug: Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo]: The container Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo/pacemaker-restarts]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/File[/var/lib/tripleo/pacemaker-restarts]: The container Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone] will propagate my refresh event", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/Exec[haproxy-clone 
resource restart flag]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]/Exec[haproxy-clone resource restart flag]: Resource is being skipped, unscheduling all events", > "Info: Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]: Unscheduling all events on Tripleo::Pacemaker::Resource_restart_flag[haproxy-clone]", > "Debug: Haproxy::Service[haproxy]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Haproxy::Service[haproxy]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::end]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Anchor[haproxy::haproxy::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Stage[main]/Tripleo::Profile::Base::Haproxy/Exec[haproxy-reload]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, 
augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, tripleo::firewall::rule, pacemaker::resource::bundle, pacemaker::property, pacemaker::resource::ip, pacemaker::resource::ocf, pacemaker::constraint::order, pacemaker::constraint::colocation", > "Debug: Finishing transaction 32613240", > "Notice: Applied catalog in 151.26 seconds", > " Total: 92", > " Success: 92", > " Total: 254", > " Skipped: 37", > " Out of sync: 
91", > " Changed: 91", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File: 0.06", > " Last run: 1529673867", > " Total: 155.81", > " Firewall: 21.41", > " Pcmk constraint: 37.61", > " Pcmk property: 4.90", > " Config retrieval: 6.42", > " Pcmk resource: 75.57", > " Pcmk bundle: 9.84", > " Config: 1529673709", > "Debug: Finishing transaction 54759980", > "+ TAGS=file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", > "+ CONFIG='include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle'", > "+ puppet apply --debug --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation -e 'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle'", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications." 
> ] >} >2018-06-22 09:24:44,804 p=21516 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks2.json exists] ******** >2018-06-22 09:24:45,260 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:24:45,274 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:24:45,281 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:24:45,313 p=21516 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 2] ******************** >2018-06-22 09:24:45,380 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:45,380 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:45,392 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:45,416 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 2] *** >2018-06-22 09:24:45,447 p=21516 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,471 p=21516 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,484 p=21516 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,490 p=21516 u=mistral | PLAY [External deployment step 3] ********************************************** >2018-06-22 09:24:45,510 p=21516 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-22 09:24:45,529 p=21516 u=mistral | skipping: [undercloud] 
=> {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,545 p=21516 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-22 09:24:45,570 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,575 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,582 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,600 p=21516 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-22 09:24:45,620 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,640 p=21516 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-22 09:24:45,662 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,679 p=21516 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-22 09:24:45,698 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,715 p=21516 u=mistral | TASK [set ceph-ansible extra 
vars] ********************************************* >2018-06-22 09:24:45,733 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,751 p=21516 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-22 09:24:45,771 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,789 p=21516 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-22 09:24:45,807 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,824 p=21516 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-22 09:24:45,842 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,859 p=21516 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-22 09:24:45,877 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,893 p=21516 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-22 09:24:45,913 p=21516 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,934 p=21516 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-22 09:24:45,953 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:45,970 p=21516 u=mistral | TASK [generate ceph-ansible group vars mgrs] 
*********************************** >2018-06-22 09:24:45,989 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,007 p=21516 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-22 09:24:46,024 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,041 p=21516 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-22 09:24:46,059 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,077 p=21516 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-22 09:24:46,094 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,112 p=21516 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-22 09:24:46,130 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,150 p=21516 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-22 09:24:46,167 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,186 p=21516 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-22 09:24:46,205 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,210 p=21516 u=mistral | PLAY [Overcloud deploy step tasks for 3] *************************************** >2018-06-22 09:24:46,237 p=21516 u=mistral | TASK [include_role] 
************************************************************ >2018-06-22 09:24:46,266 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,291 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,307 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,328 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:24:46,356 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,383 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,396 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,417 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:24:46,448 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,473 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,485 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,508 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:24:46,541 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,568 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,580 p=21516 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,603 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:24:46,634 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,660 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,674 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,680 p=21516 u=mistral | PLAY [Overcloud common deploy step tasks 3] ************************************ >2018-06-22 09:24:46,706 p=21516 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-22 09:24:46,738 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,764 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,778 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,799 p=21516 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-22 09:24:46,831 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,856 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,869 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,892 p=21516 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-22 09:24:46,921 p=21516 u=mistral | skipping: 
[controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,950 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,962 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:46,985 p=21516 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-22 09:24:47,015 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,043 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,054 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,077 p=21516 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-06-22 09:24:47,107 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,136 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,149 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,171 p=21516 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-22 09:24:47,203 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,230 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,242 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,264 p=21516 u=mistral | 
TASK [Write docker config scripts] ********************************************* >2018-06-22 09:24:47,320 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c 
/etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for 
host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,325 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check 
if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,326 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,327 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,329 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n 
secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,331 p=21516 u=mistral | skipping: [controller-0] => 
(item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,332 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n 
echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,362 p=21516 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-06-22 09:24:47,395 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,395 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,423 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,424 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,424 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this 
result", "changed": false} >2018-06-22 09:24:47,425 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,426 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,426 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,433 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,435 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,438 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,444 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,448 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,451 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,456 p=21516 u=mistral | skipping: [ceph-0] => 
(item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,461 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,466 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,473 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,493 p=21516 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-06-22 09:24:47,524 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,549 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,562 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:24:47,583 p=21516 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-22 09:24:47,613 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,638 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,651 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:24:47,672 p=21516 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-22 09:24:47,732 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,742 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": 
true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", 
"/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,749 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,755 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,759 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': 
u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", 
"net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,767 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,894 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' 
'192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL', u'DB_ROOT_PASSWORD=zeHIZe0ICg'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': 
False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W'], 'volumes': 
[u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", 
"/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL", "DB_ROOT_PASSWORD=zeHIZe0ICg"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:47,908 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', 
u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', 
u'6CLNy5Ewot5UhcBYmt27oGDMD'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", 
"/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "6CLNy5Ewot5UhcBYmt27oGDMD"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", 
"start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:24:47,926 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': 
u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', 
u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', 
u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if 
/usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 
'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, 
"item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], 
"detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": 
"192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", 
"/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": 
"192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": 
{"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": 
["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if 
/usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,019 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,020 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,021 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': 
{}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,021 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,022 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,023 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,029 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 
0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", 
"/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, 
"gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,053 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', 
u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 
'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': 
{'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', 
u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', 
u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 
'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': 
u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', 
u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", 
"net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", 
"net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,066 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, 
"skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,138 p=21516 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-22 09:24:48,173 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,198 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,210 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,234 p=21516 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-22 09:24:48,314 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,315 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,316 p=21516 u=mistral | skipping: 
[compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,322 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,324 p=21516 u=mistral | skipping: [compute-0] => (item={'value': 
{'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,328 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,333 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,339 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,343 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,427 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,430 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,436 p=21516 u=mistral | skipping: [controller-0] => (item={'value': 
{'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,439 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,444 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,450 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,454 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': 
'/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,459 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,464 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional 
result was False"} >2018-06-22 09:24:48,470 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": 
"/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,475 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,479 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 
'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,485 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,490 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,496 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,499 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': 
u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,504 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': 
'/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,510 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,514 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': 
u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,518 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,524 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,529 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,536 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": 
[{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,539 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,544 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,548 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,554 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,558 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,564 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,569 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": 
"/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,575 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,579 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,584 p=21516 u=mistral | skipping: [controller-0] => 
(item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,587 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,593 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd 
-DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,597 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,603 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,607 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': 
u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,612 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,617 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,623 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": 
"/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,626 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,632 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,637 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,641 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} 
>2018-06-22 09:24:48,645 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,652 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': 
'/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,655 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,661 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': 
'/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,665 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,670 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": 
true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,674 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,677 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": 
"/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,683 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,688 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, 
{"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,692 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,696 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,701 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': 
[{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,705 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": 
"/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,710 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,714 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,719 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,724 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,729 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": 
"/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,737 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,777 p=21516 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-22 09:24:48,788 p=21516 u=mistral | 
[WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:24:48,814 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:24:48,838 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:24:48,864 p=21516 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-22 09:24:48,920 p=21516 u=mistral | skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': 'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:48,958 p=21516 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-22 09:24:48,988 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:49,012 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:49,027 p=21516 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:24:49,049 p=21516 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-22 09:24:49,768 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673889.09-60385463732672/source", "state": "file", "uid": 0} >2018-06-22 09:24:49,774 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673889.12-84879696171771/source", "state": "file", "uid": 0} >2018-06-22 09:24:49,807 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "62439dd24dde40c90e7a39f6a1b31cc6061fe59b", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "d1a4fc06e2525150450e67007bfcc8f3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529673889.15-144182579377988/source", "state": "file", "uid": 0} >2018-06-22 09:24:49,831 p=21516 u=mistral | TASK [Run puppet host configuration for step 3] ******************************** >2018-06-22 09:24:59,128 p=21516 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:24:59,379 p=21516 u=mistral | 
changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:25:03,322 p=21516 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:25:03,346 p=21516 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 3] *** >2018-06-22 09:25:03,469 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.92 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seltype: seltype changed 'etc_t' to 'system_conf_t'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seltype: seltype changed 'etc_t' to 'system_conf_t'", > "Notice: Applied catalog in 3.44 seconds", > "Changes:", > " Total: 4", > "Events:", > " Success: 4", > "Resources:", > " Total: 217", > " Corrective change: 3", > " Out of sync: 4", > " Changed: 4", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " File line: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.00", > " Augeas: 0.02", > " Firewall: 
0.02", > " File: 0.13", > " Service: 0.22", > " Package: 0.35", > " Pcmk property: 0.36", > " Pcmk resource default: 0.37", > " Exec: 0.88", > " Last run: 1529673902", > " Config retrieval: 3.46", > " Total: 5.82", > " Concat fragment: 0.00", > "Version:", > " Config: 1529673896", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:25:03,498 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.92 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.29 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 141", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.10", > " Service: 0.13", > " Package: 0.23", > " Exec: 0.26", > " Last run: 1529673899", > " Config retrieval: 2.24", > " Total: 3.00", > " Concat fragment: 0.00", > " Filebucket: 0.00", > "Version:", > " Config: 1529673895", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " 
with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:25:03,512 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.92 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.16 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 135", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.09", > " Service: 0.11", > " Package: 0.23", > " Exec: 0.24", > " Last run: 1529673898", > " Config retrieval: 2.22", > " Total: 2.91", > " Concat fragment: 0.00", > "Version:", > " Config: 1529673895", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:25:03,536 p=21516 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 3] ***************** >2018-06-22 09:25:03,568 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:25:03,595 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:25:03,609 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:25:03,634 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 3] *** >2018-06-22 09:25:03,665 p=21516 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:25:03,691 p=21516 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:25:03,704 p=21516 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:25:03,728 p=21516 u=mistral | TASK [Start containers for step 3] ********************************************* >2018-06-22 09:25:04,497 p=21516 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:25:30,632 p=21516 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:26:17,462 p=21516 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-06-22 09:26:17,486 p=21516 u=mistral | TASK [Debug output for task which failed: Start containers for step 3] ********* >2018-06-22 09:26:17,586 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "333aa6b2b383: Already exists", > "61fdbbbd43a6: Pulling fs layer", > "61fdbbbd43a6: Verifying Checksum", > "61fdbbbd43a6: Download complete", > "61fdbbbd43a6: Pull complete", > "Digest: sha256:95db990608ca6e4c17f012e9517d9667fa79c8e102fdf5a2820de692b385e938", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-account ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-account", > "a98c7da29d65: Already exists", > "b85dac0937a4: Pulling fs layer", > "b85dac0937a4: Verifying Checksum", > "b85dac0937a4: Download complete", > "b85dac0937a4: Pull complete", > "Digest: sha256:8619e6534421b29808eaaad146ceac6399780459430f3c7fa490089377aa1380", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", > "stdout: ", > "stdout: e8482be6b4e13b2aac146a402303643465e190d089be3d39d57b1b41e0d41649", > "stdout: 2018-06-22 13:25:09.248 11 WARNING oslo_config.cfg [-] Option \"db_backend\" from group \"DEFAULT\" is deprecated. Use option \"backend\" from group \"database\".", > "2018-06-22 13:25:09.324 11 INFO migrate.versioning.api [-] 70 -> 71... 
", > "2018-06-22 13:25:09.488 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.488 11 INFO migrate.versioning.api [-] 71 -> 72... ", > "2018-06-22 13:25:09.524 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.524 11 INFO migrate.versioning.api [-] 72 -> 73... ", > "2018-06-22 13:25:09.676 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.676 11 INFO migrate.versioning.api [-] 73 -> 74... ", > "2018-06-22 13:25:09.683 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.683 11 INFO migrate.versioning.api [-] 74 -> 75... ", > "2018-06-22 13:25:09.690 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.690 11 INFO migrate.versioning.api [-] 75 -> 76... ", > "2018-06-22 13:25:09.697 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.697 11 INFO migrate.versioning.api [-] 76 -> 77... ", > "2018-06-22 13:25:09.703 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.704 11 INFO migrate.versioning.api [-] 77 -> 78... ", > "2018-06-22 13:25:09.710 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.710 11 INFO migrate.versioning.api [-] 78 -> 79... ", > "2018-06-22 13:25:09.801 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.802 11 INFO migrate.versioning.api [-] 79 -> 80... ", > "2018-06-22 13:25:09.895 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.895 11 INFO migrate.versioning.api [-] 80 -> 81... ", > "2018-06-22 13:25:09.901 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.901 11 INFO migrate.versioning.api [-] 81 -> 82... ", > "2018-06-22 13:25:09.915 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.915 11 INFO migrate.versioning.api [-] 82 -> 83... ", > "2018-06-22 13:25:09.924 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.924 11 INFO migrate.versioning.api [-] 83 -> 84... 
", > "2018-06-22 13:25:09.931 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.931 11 INFO migrate.versioning.api [-] 84 -> 85... ", > "2018-06-22 13:25:09.937 11 INFO migrate.versioning.api [-] done", > "2018-06-22 13:25:09.938 11 INFO migrate.versioning.api [-] 85 -> 86... ", > "2018-06-22 13:25:09.999 11 INFO migrate.versioning.api [-] done", > "stdout: \u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend\u001b[0m", > "\u001b[mNotice: Compiled catalog for controller-0.localdomain in environment production in 1.46 seconds\u001b[0m", > "\u001b[0;32mInfo: Applying configuration version '1529673915'\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/external_ids: external_ids changed '' to 'bridge-id=br-ex'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]/Vs_bridge[br-isolated]/external_ids: external_ids changed '' to 'bridge-id=br-isolated'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]\u001b[0m", > "\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m", > "\u001b[mNotice: Applied catalog in 0.35 seconds\u001b[0m", > "stderr: Running in chroot, ignoring request.", > "\u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", > "\u001b[1;33mWarning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)\u001b[0m", > 
"\u001b[1;33mWarning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "\u001b[1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')\u001b[0m", > "stderr: Option \"logdir\" from group \"DEFAULT\" is deprecated. Use option \"log-dir\" from group \"DEFAULT\".", > "stdout: Upgraded database to: queens_expand01, current revision(s): queens_expand01", > "Database migration is up to date. No migration needed.", > "Upgraded database to: queens_contract01, current revision(s): queens_contract01", > "Database is synced successfully.", > "stderr: + sudo -E kolla_set_configs", > "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Deleting /etc/glance/glance-api.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/glance/glance-api.conf to /etc/glance/glance-api.conf", > "INFO:__main__:Deleting /etc/glance/glance-cache.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/glance/glance-cache.conf to /etc/glance/glance-cache.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/tripleo.cnf to /etc/my.cnf.d/tripleo.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.conf to /etc/ceph/ceph.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.admin.keyring to /etc/ceph/ceph.client.admin.keyring", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src-ceph/ceph.mon.keyring to /etc/ceph/ceph.mon.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mgr.controller-0.keyring to /etc/ceph/ceph.mgr.controller-0.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.manila.keyring to /etc/ceph/ceph.client.manila.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.radosgw.keyring to /etc/ceph/ceph.client.radosgw.keyring", > "INFO:__main__:Writing out command to execute", > "INFO:__main__:Setting permission for /var/lib/glance", > "INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring", > "++ cat /run_command", > "+ CMD='/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf'", > "+ ARGS=", > "+ [[ ! -n '' ]]", > "+ . kolla_extend_start", > "++ [[ ! -d /var/log/kolla/glance ]]", > "++ mkdir -p /var/log/kolla/glance", > "+++ stat -c %a /var/log/kolla/glance", > "++ [[ 2755 != \\7\\5\\5 ]]", > "++ chmod 755 /var/log/kolla/glance", > "++ . 
/usr/local/bin/kolla_glance_extend_start", > "+++ [[ -n 0 ]]", > "+++ glance-manage db_sync", > "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1340: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade", > " expire_on_commit=expire_on_commit, _conf=conf)", > "INFO [alembic.runtime.migration] Context impl MySQLImpl.", > "INFO [alembic.runtime.migration] Will assume non-transactional DDL.", > "INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial", > "INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table", > "INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server", > "INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images", > "INFO [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01", > "INFO [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01", > "INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images", > "INFO [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables", > "INFO [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01", > "+++ glance-manage db_load_metadefs", > "+++ exit 0", > "stdout: '/swift_ringbuilder/etc/swift/account.ring.gz' -> '/etc/swift/account.ring.gz'", > "'/swift_ringbuilder/etc/swift/container.ring.gz' -> '/etc/swift/container.ring.gz'", > "'/swift_ringbuilder/etc/swift/object.ring.gz' -> '/etc/swift/object.ring.gz'", > "'/swift_ringbuilder/etc/swift/account.builder' -> '/etc/swift/account.builder'", > "'/swift_ringbuilder/etc/swift/container.builder' -> '/etc/swift/container.builder'", > "'/swift_ringbuilder/etc/swift/object.builder' -> 
'/etc/swift/object.builder'", > "'/swift_ringbuilder/etc/swift/backups' -> '/etc/swift/backups'", > "'/swift_ringbuilder/etc/swift/backups/1529673029.account.builder' -> '/etc/swift/backups/1529673029.account.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529673029.object.builder' -> '/etc/swift/backups/1529673029.object.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529673030.container.builder' -> '/etc/swift/backups/1529673030.container.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529673032.account.builder' -> '/etc/swift/backups/1529673032.account.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529673032.account.ring.gz' -> '/etc/swift/backups/1529673032.account.ring.gz'", > "'/swift_ringbuilder/etc/swift/backups/1529673032.container.builder' -> '/etc/swift/backups/1529673032.container.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529673032.container.ring.gz' -> '/etc/swift/backups/1529673032.container.ring.gz'", > "'/swift_ringbuilder/etc/swift/backups/1529673032.object.builder' -> '/etc/swift/backups/1529673032.object.builder'", > "'/swift_ringbuilder/etc/swift/backups/1529673032.object.ring.gz' -> '/etc/swift/backups/1529673032.object.ring.gz'", > "stderr: INFO [alembic.runtime.migration] Context impl MySQLImpl.", > "INFO [alembic.runtime.migration] Running upgrade -> 001, Icehouse release", > "INFO [alembic.runtime.migration] Running upgrade 001 -> 002, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 002 -> 003, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 003 -> 004, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 004 -> 005, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 005 -> 006, placeholder", > "INFO [alembic.runtime.migration] Running upgrade 006 -> 007, convert clusters.status_description to LongText", > "INFO [alembic.runtime.migration] Running upgrade 007 -> 008, add security_groups field to node groups", > "INFO [alembic.runtime.migration] 
Running upgrade 008 -> 009, add rollback info to cluster", > "INFO [alembic.runtime.migration] Running upgrade 009 -> 010, add auto_security_groups flag to node group", > "INFO [alembic.runtime.migration] Running upgrade 010 -> 011, add Sahara settings info to cluster", > "INFO [alembic.runtime.migration] Running upgrade 011 -> 012, add availability_zone field to node groups", > "INFO [alembic.runtime.migration] Running upgrade 012 -> 013, add volumes_availability_zone field to node groups", > "INFO [alembic.runtime.migration] Running upgrade 013 -> 014, add_volume_type", > "INFO [alembic.runtime.migration] Running upgrade 014 -> 015, add_events_objects", > "INFO [alembic.runtime.migration] Running upgrade 015 -> 016, Add is_proxy_gateway", > "INFO [alembic.runtime.migration] Running upgrade 016 -> 017, drop progress in JobExecution", > "INFO [alembic.runtime.migration] Running upgrade 017 -> 018, add volume_local_to_instance flag", > "INFO [alembic.runtime.migration] Running upgrade 018 -> 019, Add is_default field for cluster and node_group templates", > "INFO [alembic.runtime.migration] Running upgrade 019 -> 020, remove redandunt progress ops", > "INFO [alembic.runtime.migration] Running upgrade 020 -> 021, Add data_source_urls to job_executions to support placeholders", > "INFO [alembic.runtime.migration] Running upgrade 021 -> 022, add_job_interface", > "INFO [alembic.runtime.migration] Running upgrade 022 -> 023, add_use_autoconfig", > "INFO [alembic.runtime.migration] Running upgrade 023 -> 024, manila_shares", > "INFO [alembic.runtime.migration] Running upgrade 024 -> 025, Increase internal_ip and management_ip column size to work with IPv6", > "INFO [alembic.runtime.migration] Running upgrade 025 -> 026, add is_public and is_protected flags", > "INFO [alembic.runtime.migration] Running upgrade 026 -> 027, Rename oozie_job_id", > "INFO [alembic.runtime.migration] Running upgrade 027 -> 028, add_storage_devices_number", > "INFO [alembic.runtime.migration] 
Running upgrade 028 -> 029, set is_protected on is_default", > "INFO [alembic.runtime.migration] Running upgrade 029 -> 030, health-check", > "INFO [alembic.runtime.migration] Running upgrade 030 -> 031, added_plugins_table", > "INFO [alembic.runtime.migration] Running upgrade 031 -> 032, 032_add_domain_name", > "INFO [alembic.runtime.migration] Running upgrade 032 -> 033, 033_add anti_affinity_ratio field to cluster", > "stdout: de8245765762e222a5056a346461e30158da100516ab1d04a962f2f37388c579", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-keystone_wsgi_admin.conf to /etc/httpd/conf.d/10-keystone_wsgi_admin.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/10-keystone_wsgi_main.conf to /etc/httpd/conf.d/10-keystone_wsgi_main.conf", > "INFO:__main__:Deleting /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/ssl.conf to /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/access_compat.load to /etc/httpd/conf.modules.d/access_compat.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/actions.load to /etc/httpd/conf.modules.d/actions.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.conf to /etc/httpd/conf.modules.d/alias.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.load to /etc/httpd/conf.modules.d/alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_basic.load to /etc/httpd/conf.modules.d/auth_basic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_digest.load to /etc/httpd/conf.modules.d/auth_digest.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_anon.load to /etc/httpd/conf.modules.d/authn_anon.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_core.load to /etc/httpd/conf.modules.d/authn_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_dbm.load to /etc/httpd/conf.modules.d/authn_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_file.load to /etc/httpd/conf.modules.d/authn_file.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_core.load to /etc/httpd/conf.modules.d/authz_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_dbm.load to /etc/httpd/conf.modules.d/authz_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_groupfile.load to /etc/httpd/conf.modules.d/authz_groupfile.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_host.load to /etc/httpd/conf.modules.d/authz_host.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_owner.load to /etc/httpd/conf.modules.d/authz_owner.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_user.load to /etc/httpd/conf.modules.d/authz_user.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.conf to /etc/httpd/conf.modules.d/autoindex.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.load to /etc/httpd/conf.modules.d/autoindex.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cache.load to /etc/httpd/conf.modules.d/cache.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cgi.load to /etc/httpd/conf.modules.d/cgi.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav.load to /etc/httpd/conf.modules.d/dav.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.conf to /etc/httpd/conf.modules.d/dav_fs.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.load to /etc/httpd/conf.modules.d/dav_fs.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.conf to /etc/httpd/conf.modules.d/deflate.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.load to /etc/httpd/conf.modules.d/deflate.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.conf to /etc/httpd/conf.modules.d/dir.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.load to /etc/httpd/conf.modules.d/dir.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/env.load to /etc/httpd/conf.modules.d/env.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/expires.load to /etc/httpd/conf.modules.d/expires.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ext_filter.load to /etc/httpd/conf.modules.d/ext_filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/filter.load to /etc/httpd/conf.modules.d/filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/include.load to /etc/httpd/conf.modules.d/include.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/log_config.load to /etc/httpd/conf.modules.d/log_config.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/logio.load to /etc/httpd/conf.modules.d/logio.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.conf to /etc/httpd/conf.modules.d/mime.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.load to 
/etc/httpd/conf.modules.d/mime.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.conf to /etc/httpd/conf.modules.d/mime_magic.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.load to /etc/httpd/conf.modules.d/mime_magic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.conf to /etc/httpd/conf.modules.d/negotiation.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.load to /etc/httpd/conf.modules.d/negotiation.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.conf to /etc/httpd/conf.modules.d/prefork.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.load to /etc/httpd/conf.modules.d/prefork.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/rewrite.load to /etc/httpd/conf.modules.d/rewrite.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.conf to /etc/httpd/conf.modules.d/setenvif.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.load to /etc/httpd/conf.modules.d/setenvif.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/socache_shmcb.load to /etc/httpd/conf.modules.d/socache_shmcb.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/speling.load to /etc/httpd/conf.modules.d/speling.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ssl.load to /etc/httpd/conf.modules.d/ssl.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.conf to /etc/httpd/conf.modules.d/status.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.load to 
/etc/httpd/conf.modules.d/status.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/substitute.load to /etc/httpd/conf.modules.d/substitute.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/suexec.load to /etc/httpd/conf.modules.d/suexec.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/systemd.load to /etc/httpd/conf.modules.d/systemd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/unixd.load to /etc/httpd/conf.modules.d/unixd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/usertrack.load to /etc/httpd/conf.modules.d/usertrack.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/version.load to /etc/httpd/conf.modules.d/version.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/vhost_alias.load to /etc/httpd/conf.modules.d/vhost_alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.conf to /etc/httpd/conf.modules.d/wsgi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.load to /etc/httpd/conf.modules.d/wsgi.load", > "INFO:__main__:Deleting /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/httpd.conf to /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/ports.conf to /etc/httpd/conf/ports.conf", > "INFO:__main__:Creating directory /etc/keystone/credential-keys", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/credential-keys/0 to /etc/keystone/credential-keys/0", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/credential-keys/1 to /etc/keystone/credential-keys/1", > "INFO:__main__:Creating directory /etc/keystone/fernet-keys", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/keystone/fernet-keys/0 to /etc/keystone/fernet-keys/0", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/fernet-keys/1 to /etc/keystone/fernet-keys/1", > "INFO:__main__:Deleting /etc/keystone/keystone.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/keystone/keystone.conf to /etc/keystone/keystone.conf", > "INFO:__main__:Creating directory /etc/systemd/system/httpd.service.d", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/systemd/system/httpd.service.d/httpd.conf to /etc/systemd/system/httpd.service.d/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/spool/cron/keystone to /var/spool/cron/keystone", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/keystone/keystone-admin to /var/www/cgi-bin/keystone/keystone-admin", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/keystone/keystone-public to /var/www/cgi-bin/keystone/keystone-public", > "+ CMD='/usr/sbin/httpd -DFOREGROUND'", > "++ [[ rhel =~ debian|ubuntu ]]", > "++ rm -rf /var/run/httpd/htcacheclean /run/httpd/htcacheclean '/tmp/httpd*'", > "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", > "++ [[ ! -d /var/log/kolla/keystone ]]", > "++ mkdir -p /var/log/kolla/keystone", > "+++ stat -c %U:%G /var/log/kolla/keystone", > "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", > "++ chown keystone:kolla /var/log/kolla/keystone", > "++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'", > "++ touch /var/log/kolla/keystone/keystone.log", > "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", > "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", > "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", > "+++ stat -c %a /var/log/kolla/keystone", > "++ chmod 755 /var/log/kolla/keystone", > "++ EXTRA_KEYSTONE_MANAGE_ARGS=", > "++ [[ -n '' ]]", > "++ [[ -n 0 ]]", > "++ sudo -H -u keystone keystone-manage db_sync", > "++ exit 0", > "stdout: d4b9d54eaa66eb3736982973c82ddc74c6cd8b5c57662f23de1854f754727a3f", > "stdout: Running upgrade for neutron ...", > "OK", > "Running upgrade for neutron-fwaas ...", > "Running upgrade for neutron-lbaas ...", > "Running upgrade for vmware-nsx ...", > "INFO [alembic.runtime.migration] Running upgrade -> kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225", > "INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151", > "INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf", > "INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee", > "INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f", > "INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773", > "INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592", > "INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7", > "INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79", > "INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051", > "INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136", > "INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59", > "INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d", > "INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 
13cfb89f881a", > "INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25", > "INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee", > "INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9", > "INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4", > "INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664", > "INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5", > "INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f", > "INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821", > "INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4", > "INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81", > "INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6", > "INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532", > "INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f", > "INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a", > "INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b", > "INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73", > "INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502", > "INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee", > "INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048", > "INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99", > "INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada", > "INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016", > "INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3", > 
"INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d", > "INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d", > "INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297", > "INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c", > "INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39", > "INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b", > "INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050", > "INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9", > "INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada", > "INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc", > "INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53", > "INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70", > "INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37", > "INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa", > "INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf", > "INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4", > "INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e", > "INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90", > "INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4", > "INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426", > "INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524", > "INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc", > "INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d", > "INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70", > "INFO 
[alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c", > "INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c", > "INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da", > "INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192", > "INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9", > "INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6", > "INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f", > "INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee", > "INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c", > "INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a", > "INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad", > "INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab", > "INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0", > "INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62", > "INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353", > "INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586", > "INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d", > "INFO [alembic.runtime.migration] Running upgrade -> start_neutron_fwaas, start neutron-fwaas chain", > "INFO [alembic.runtime.migration] Running upgrade start_neutron_fwaas -> 4202e3047e47, add_index_tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 4202e3047e47 -> 540142f314f4, FWaaS router insertion", > "INFO [alembic.runtime.migration] Running upgrade 540142f314f4 -> 796c68dffbb, cisco_csr_fwaas", > "INFO [alembic.runtime.migration] Running upgrade 796c68dffbb -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> c40fbb377ad, Initial Liberty no-op 
script.", > "INFO [alembic.runtime.migration] Running upgrade c40fbb377ad -> 4b47ea298795, add reject rule", > "INFO [alembic.runtime.migration] Running upgrade 4b47ea298795 -> d6a12e637e28, neutron-fwaas v2.0", > "INFO [alembic.runtime.migration] Running upgrade d6a12e637e28 -> 876782258a43, create_default_firewall_groups_table", > "INFO [alembic.runtime.migration] Running upgrade 876782258a43 -> f24e0d5e5bff, uniq_firewallgroupportassociation0port", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 67c8e8d61d5, Initial Liberty no-op script.", > "INFO [alembic.runtime.migration] Running upgrade 67c8e8d61d5 -> 458aa42b14b, fw_table_alter script to make <name> column case sensitive", > "INFO [alembic.runtime.migration] Running upgrade 458aa42b14b -> f83a0b2964d0, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade f83a0b2964d0 -> fd38cd995cc0, change shared attribute for firewall resource", > "INFO [alembic.runtime.migration] Running upgrade -> start_neutron_lbaas, start neutron-lbaas chain", > "INFO [alembic.runtime.migration] Running upgrade start_neutron_lbaas -> lbaasv2, lbaas version 2 api", > "INFO [alembic.runtime.migration] Running upgrade lbaasv2 -> 4deef6d81931, add provisioning and operating statuses", > "INFO [alembic.runtime.migration] Running upgrade 4deef6d81931 -> 4b6d8d5310b8, add_index_tenant_id", > "INFO [alembic.runtime.migration] Running upgrade 4b6d8d5310b8 -> 364f9b6064f0, agentv2", > "INFO [alembic.runtime.migration] Running upgrade 364f9b6064f0 -> lbaasv2_tls, lbaasv2 TLS", > "INFO [alembic.runtime.migration] Running upgrade lbaasv2_tls -> 4ba00375f715, edge_driver", > "INFO [alembic.runtime.migration] Running upgrade 4ba00375f715 -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 3345facd0452, Initial Liberty no-op expand script.", > "INFO [alembic.runtime.migration] Running upgrade 3345facd0452 -> 4a408dd491c2, Addition of Name column to lbaas_members and lbaas_healthmonitors 
table", > "INFO [alembic.runtime.migration] Running upgrade 4a408dd491c2 -> 3426acbc12de, Add flavor id", > "INFO [alembic.runtime.migration] Running upgrade 3426acbc12de -> 6aee0434f911, independent pools", > "INFO [alembic.runtime.migration] Running upgrade 6aee0434f911 -> 3543deab1547, add_l7_tables", > "INFO [alembic.runtime.migration] Running upgrade 3543deab1547 -> 62deca5010cd, Add tenant-id index for L7 tables", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 130ebfdef43, Initial Liberty no-op contract revision.", > "INFO [alembic.runtime.migration] Running upgrade 130ebfdef43 -> 4b4dc6d5d843, rename tenant to project", > "INFO [alembic.runtime.migration] Running upgrade 4b4dc6d5d843 -> e6417a8b114d, Drop v1 tables", > "INFO [alembic.runtime.migration] Running upgrade 62deca5010cd -> 844352f9fe6f, Add healthmonitor max retries down", > "INFO [alembic.runtime.migration] Running upgrade -> kilo, kilo", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 53a3254aa95e, Initial Liberty no-op expand script.", > "INFO [alembic.runtime.migration] Running upgrade 53a3254aa95e -> 28430956782d, nsxv3_security_groups", > "INFO [alembic.runtime.migration] Running upgrade 28430956782d -> 279b70ac3ae8, NSXv3 Add l2gwconnection table", > "INFO [alembic.runtime.migration] Running upgrade 279b70ac3ae8 -> 312211a5725f, nsxv_lbv2", > "INFO [alembic.runtime.migration] Running upgrade 312211a5725f -> 2af850eb3970, update nsxv tz binding type", > "INFO [alembic.runtime.migration] Running upgrade 2af850eb3970 -> 69fb78b33d41, NSXv add dns search domain to subnets", > "INFO [alembic.runtime.migration] Running upgrade 69fb78b33d41 -> 20483029f1ff, update nsx_v3 tz_network_bindings_binding_type", > "INFO [alembic.runtime.migration] Running upgrade 20483029f1ff -> 4c45bcadccf9, extend_secgroup_rule", > "INFO [alembic.runtime.migration] Running upgrade 4c45bcadccf9 -> 2c87aedb206f, nsxv_security_group_logging", > "INFO [alembic.runtime.migration] Running upgrade 
2c87aedb206f -> 3e4dccfe6fb4, NSXv add dns search domain to subnets", > "INFO [alembic.runtime.migration] Running upgrade 3e4dccfe6fb4 -> 967462f585e1, add dvs_id column to neutron_nsx_network_mappings", > "INFO [alembic.runtime.migration] Running upgrade 967462f585e1 -> b7f41687cbad, nsxv3_qos_policy_mapping", > "INFO [alembic.runtime.migration] Running upgrade b7f41687cbad -> c288bb6a7252, NSXv add resource pool to the router bindings table", > "INFO [alembic.runtime.migration] Running upgrade c288bb6a7252 -> c644ec62c585, NSXv3 add nsx_service_bindings and nsx_dhcp_bindings tables", > "INFO [alembic.runtime.migration] Running upgrade c644ec62c585 -> 5e564e781d77, add nsx binding type", > "INFO [alembic.runtime.migration] Running upgrade 5e564e781d77 -> aede17d51d0f, add timestamp", > "INFO [alembic.runtime.migration] Running upgrade aede17d51d0f -> 7e46906f8997, lbaas foreignkeys", > "INFO [alembic.runtime.migration] Running upgrade 7e46906f8997 -> 86a55205337c, NSXv add availability zone to the router bindings table instead of", > "the resource pool column", > "INFO [alembic.runtime.migration] Running upgrade 86a55205337c -> 633514d94b93, Add support for TaaS", > "INFO [alembic.runtime.migration] Running upgrade 633514d94b93 -> 1b4eaffe4f31, NSX Adds a 'provider' attribute to security-group", > "INFO [alembic.runtime.migration] Running upgrade 1b4eaffe4f31 -> 6e6da8296c0e, Add support for IPAM in NSXv", > "INFO [alembic.runtime.migration] Running upgrade kilo -> 393bf843b96, Initial Liberty no-op contract script.", > "INFO [alembic.runtime.migration] Running upgrade 393bf843b96 -> 3c88bdea3054, nsxv_vdr_dhcp_binding.py", > "INFO [alembic.runtime.migration] Running upgrade 3c88bdea3054 -> 5ed1ffbc0d2a, nsxv_security_group_logging", > "INFO [alembic.runtime.migration] Running upgrade 5ed1ffbc0d2a -> 081af0e396d7, nsxv3_secgroup_local_ip_prefix", > "INFO [alembic.runtime.migration] Running upgrade 081af0e396d7 -> dbe29d208ac6, NSXv add DHCP MTU to subnets", > 
"INFO [alembic.runtime.migration] Running upgrade dbe29d208ac6 -> d49ac91b560e, Support shared pools with NSXv LBaaSv2 driver", > "INFO [alembic.runtime.migration] Running upgrade d49ac91b560e -> 5c8f451290b7, nsxv_subnet_ipam rename to nsx_subnet_ipam", > "INFO [alembic.runtime.migration] Running upgrade 5c8f451290b7 -> 14a89ddf96e2, NSX Adds a 'availability_zone' attribute to internal-networks table", > "INFO [alembic.runtime.migration] Running upgrade 14a89ddf96e2 -> 8c0a81a07691, Update the primary key constraint of nsx_subnet_ipam", > "INFO [alembic.runtime.migration] Running upgrade 8c0a81a07691 -> 84ceffa27115, remove the foreign key constrain from nsxv3_qos_policy_mapping", > "INFO [alembic.runtime.migration] Running upgrade 84ceffa27115 -> a1be06050b41, update nsx binding types", > "INFO [alembic.runtime.migration] Running upgrade a1be06050b41 -> 717f7f63a219, nsxv3_lbaas_l7policy", > "INFO [alembic.runtime.migration] Running upgrade 6e6da8296c0e -> 7b5ec3caa9a4, Fix the availability zones default value in the router bindings table", > "INFO [alembic.runtime.migration] Running upgrade 7b5ec3caa9a4 -> e816d4fe9d4f, NSX Adds a 'policy' attribute to security-group", > "INFO [alembic.runtime.migration] Running upgrade e816d4fe9d4f -> dd9fe5a3a526, NSX Adds certificate table for client certificate management", > "INFO [alembic.runtime.migration] Running upgrade dd9fe5a3a526 -> 01a33f93f5fd, nsxv_lbv2_l7policy", > "INFO [alembic.runtime.migration] Running upgrade 01a33f93f5fd -> e4c503f4133f, Port vnic_type support", > "INFO [alembic.runtime.migration] Running upgrade e4c503f4133f -> 7c4704ad37df, Fix NSX Lbaas L7 policy table creation", > "INFO [alembic.runtime.migration] Running upgrade 7c4704ad37df -> 8699700cd95c, nsxv_bgp_speaker_mapping", > "INFO [alembic.runtime.migration] Running upgrade 8699700cd95c -> 53eb497903a4, Drop VDR DHCP bindings table", > "INFO [alembic.runtime.migration] Running upgrade 53eb497903a4 -> ea7a72ab9643", > "INFO 
[alembic.runtime.migration] Running upgrade ea7a72ab9643 -> 9799427fc0e1, nsx map project to plugin", > "INFO [alembic.runtime.migration] Running upgrade 9799427fc0e1 -> 0dbeda408e41, nsxv3_vpn_mapping", > "stdout: 88bc619351fd0739f3c03766dd55dd070569a395b9c57d3a660dd6a00c163a7c", > "stdout: 5f323e6a1f24f7520be99392ed2c670254cb7136429bf22c0c68d12bbe8f4d60", > "stdout: fab9e5f94a927d35304a9a088d5cf6380767327129ab6364b382d72c1457ed3c", > "stdout: (cellv2) Creating default cell_v2 cell", > "stdout: ad04ae89a611f702fbc82f6a0da7d063ade447bcf314532a4cc4cc05d9d38775", > "stdout: fe2dbf2a953c4461adb45e7cb031c0415c14f839a5bc93076bc5125b3a12b573", > "stderr: /usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')", > " result = self._query(query)", > "/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')", > "stdout: d87910f7c8fb26666b1ef54017b55907e3b2da24500d1e11c8f843805c087923" > ] >} >2018-06-22 09:26:17,594 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-libvirt ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-libvirt", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c39bfe26f6c5: Pulling fs layer", > "c39bfe26f6c5: Download complete", > "c39bfe26f6c5: Pull complete", > "Digest: sha256:ddf9894ce80fe045252534284d3aa3e1d156aca8a8eeca908571558e3b54428f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", > "", > "stderr: ", > "stdout: \u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[0;32mInfo: Loading facts\u001b[0m", > "\u001b[mNotice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend\u001b[0m", > "\u001b[mNotice: Compiled catalog for compute-0.localdomain in environment production in 1.38 seconds\u001b[0m", > "\u001b[0;32mInfo: Applying configuration version '1529673928'\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]/Vs_bridge[br-ex]/ensure: created\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[datacentre:br-ex]\u001b[0m", > "\u001b[mNotice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]/Vs_bridge[br-isolated]/external_ids: external_ids changed '' to 'bridge-id=br-isolated'\u001b[0m", > "\u001b[0;32mInfo: Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]: Unscheduling all events on Neutron::Plugins::Ovs::Bridge[tenant:br-isolated]\u001b[0m", > "\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m", > "\u001b[mNotice: Applied catalog in 0.20 seconds\u001b[0m", > "stderr: Running in chroot, ignoring request.", > "\u001b[1;33mWarning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found\u001b[0m", > "\u001b[1;33mWarning: Undefined 
variable 'deploy_config_name'; ", > " (file & line not available)\u001b[0m", > "\u001b[1;33mWarning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "\u001b[1;33mWarning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 219]:[\"unknown\", 1]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')\u001b[0m", > "stdout: 5b8325565f9fbd5ff38c0d8afa951d1311035b6abe38da45a76fe91dde09622e", > "stdout: b9da6316e86b963e4998e40b6877e67e4689030cfc72be3630e7ef10f8816c1a", > "stdout: 5fc7eff0dff45f39b588e20e63df0e76d1f5e0de5cdde32f61eb895b88b4557f" > ] >} >2018-06-22 09:26:17,614 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-22 09:26:17,641 p=21516 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks3.json exists] ******** >2018-06-22 09:26:18,087 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1529672914.1111186, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "730e4e048205e1fadc6cd518326d4622d77edad6", "ctime": 1529672914.1141186, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 52428970, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0600", "mtime": 1529672913.8801198, "nlink": 1, "path": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, 
"rusr": true, "size": 397, "uid": 0, "version": "2079696438", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-06-22 09:26:18,091 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:26:18,100 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:26:18,124 p=21516 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 3] ******************** >2018-06-22 09:26:18,179 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:26:18,191 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:57,040 p=21516 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:57,067 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 3] *** >2018-06-22 09:28:57,127 p=21516 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,128 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-06-22 13:26:18,558 INFO: 96116 -- Running docker-puppet", > "2018-06-22 13:26:18,558 INFO: 96116 -- Service compilation completed.", > "2018-06-22 13:26:18,559 INFO: 96116 -- Starting multiprocess configuration steps. 
Using 8 processes.", > "2018-06-22 13:26:18,573 INFO: 96117 -- Starting configuration of keystone_init_tasks using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 13:26:18,575 INFO: 96117 -- Removing container: docker-puppet-keystone_init_tasks", > "2018-06-22 13:26:18,621 INFO: 96117 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", > "2018-06-22 13:28:56,887 INFO: 96117 -- Removing container: docker-puppet-keystone_init_tasks", > "2018-06-22 13:28:56,940 INFO: 96117 -- Finished processing puppet configs for keystone_init_tasks" > ] >} >2018-06-22 09:28:57,141 p=21516 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,148 p=21516 u=mistral | PLAY [External deployment step 4] ********************************************** >2018-06-22 09:28:57,171 p=21516 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-22 09:28:57,193 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,210 p=21516 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-22 09:28:57,242 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,244 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,246 p=21516 u=mistral | skipping: [undercloud] => 
(item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,264 p=21516 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-22 09:28:57,282 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,301 p=21516 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-22 09:28:57,321 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,339 p=21516 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-22 09:28:57,357 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,376 p=21516 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-06-22 09:28:57,394 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,412 p=21516 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-06-22 09:28:57,430 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,451 p=21516 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-22 09:28:57,469 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,487 p=21516 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-22 09:28:57,507 p=21516 u=mistral | skipping: 
[undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,525 p=21516 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-22 09:28:57,543 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,562 p=21516 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-22 09:28:57,582 p=21516 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,601 p=21516 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-22 09:28:57,619 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,637 p=21516 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-06-22 09:28:57,656 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,676 p=21516 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-06-22 09:28:57,695 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,713 p=21516 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-22 09:28:57,730 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,747 p=21516 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-22 09:28:57,769 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,790 p=21516 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-22 09:28:57,807 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,826 p=21516 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-22 09:28:57,844 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,862 p=21516 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-22 09:28:57,880 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,885 p=21516 u=mistral | PLAY [Overcloud deploy step tasks for 4] *************************************** >2018-06-22 09:28:57,911 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:28:57,941 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,966 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,978 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:57,999 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:28:58,033 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,065 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,077 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:28:58,099 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:28:58,129 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,154 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,166 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,188 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:28:58,216 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,241 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,253 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,275 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:28:58,303 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,330 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,350 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,359 p=21516 u=mistral | PLAY [Overcloud common deploy step tasks 4] ************************************ >2018-06-22 09:28:58,419 p=21516 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-22 09:28:58,449 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:28:58,473 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,486 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,508 p=21516 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-22 09:28:58,535 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,560 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,572 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,593 p=21516 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-22 09:28:58,619 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,642 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,652 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,675 p=21516 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-22 09:28:58,699 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,721 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,732 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,752 p=21516 u=mistral | TASK [Create /var/lib/docker-config-scripts] 
*********************************** >2018-06-22 09:28:58,777 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,799 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,812 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,832 p=21516 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-22 09:28:58,858 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,880 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,894 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,914 p=21516 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-06-22 09:28:58,971 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml 
cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken 
project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 
discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,977 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport 
OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,978 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir 
/etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,979 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': 'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport 
OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,982 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini 
--config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,983 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': 'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore 
copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:58,986 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,009 p=21516 u=mistral | TASK [Set docker_config_default fact] ****************************************** 
>2018-06-22 09:28:59,059 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,059 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,060 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,063 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,064 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,064 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,065 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,066 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,066 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,068 p=21516 u=mistral | skipping: [compute-0] => 
(item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,071 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,076 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,079 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,087 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,088 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,094 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,097 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,102 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,125 p=21516 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-06-22 09:28:59,156 p=21516 u=mistral | 
skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,181 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,192 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:28:59,214 p=21516 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-22 09:28:59,245 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,272 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,284 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,306 p=21516 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-22 09:28:59,363 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,370 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', 
u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", 
"-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,375 p=21516 u=mistral | skipping: 
[compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,383 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': 
[u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL', u'DB_ROOT_PASSWORD=zeHIZe0ICg'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 
'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": 
{"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL", "DB_ROOT_PASSWORD=zeHIZe0ICg"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W"], "image": 
"192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,401 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s 
/bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': 
u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': [u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c 
/usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': 
False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', 
u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'6CLNy5Ewot5UhcBYmt27oGDMD'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": 
"host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, 
"image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "6CLNy5Ewot5UhcBYmt27oGDMD"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", 
"/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", 
"/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": 
["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,425 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko 
/var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': 
[u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': 
{'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': 
[u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', 
u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", "/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", 
"/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": 
["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], 
"config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", 
"/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,432 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,443 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", 
"/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,446 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,453 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,454 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,454 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,455 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,456 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,457 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} 
>2018-06-22 09:28:59,463 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo 
"openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', 
u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show 
openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,488 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 
'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', 
u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': 
u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': 
False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': 
{'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 
'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', 
u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": 
"host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, 
"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", 
"/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,502 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,578 p=21516 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-22 09:28:59,610 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,635 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,646 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,669 p=21516 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-22 09:28:59,748 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,753 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,754 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,755 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': 
u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,756 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,758 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': 
u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,761 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,766 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': 
u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,775 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,858 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,862 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,867 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": 
"/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,872 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,877 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,881 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,885 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,891 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': 
'/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,896 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,901 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': 
u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,906 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': 
u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,909 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,912 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,918 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,923 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,927 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": 
"/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,936 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,937 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,942 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,945 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 
'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,948 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,953 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 
'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,958 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,960 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,966 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,969 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,973 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,977 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,981 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,985 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 
'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,991 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:28:59,994 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,000 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,004 
p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,010 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,014 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,017 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,023 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,027 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': 
u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,033 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,037 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, 
{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,047 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": 
"nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,048 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,051 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,057 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,060 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": 
[{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,065 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:29:00,069 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,073 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,078 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': 
u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,082 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,086 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, 
"skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,092 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,098 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": 
"cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,102 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,107 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, 
"item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,111 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,116 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": 
"/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,119 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,124 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server 
/etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,129 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,133 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,138 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": 
{"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,143 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,150 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 
'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': '/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,192 p=21516 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-22 09:29:00,204 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:29:00,227 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:29:00,251 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:29:00,276 p=21516 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-22 09:29:00,329 p=21516 u=mistral | skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 
'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,367 p=21516 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-22 09:29:00,396 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,423 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,440 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:00,464 p=21516 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-22 09:29:01,184 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529674140.5-80058101098388/source", "state": "file", "uid": 0} >2018-06-22 09:29:01,196 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": 
"/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529674140.56-153607558337879/source", "state": "file", "uid": 0} >2018-06-22 09:29:01,206 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "ee48fb03297eb703b1954c8852d0f67fab51dac1", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "e66511bcb9efc937174b88035d019e7b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529674140.53-145678371963032/source", "state": "file", "uid": 0} >2018-06-22 09:29:01,232 p=21516 u=mistral | TASK [Run puppet host configuration for step 4] ******************************** >2018-06-22 09:29:16,864 p=21516 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:29:17,006 p=21516 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:29:20,353 p=21516 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:29:20,377 p=21516 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 4] *** >2018-06-22 09:29:20,431 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: 
Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.15 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}07ec1abc27973b8fd5c9b075f7a0d3e7'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 8.95 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " Total: 226", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " File line: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Augeas: 0.01", > " Firewall: 0.02", > " Pcmk property: 0.36", > " Pcmk resource default: 0.36", 
> " Package: 0.37", > " File: 0.38", > " Service: 0.52", > " Total: 11.66", > " Last run: 1529674159", > " Config retrieval: 3.73", > " Exec: 5.88", > " Concat fragment: 0.00", > "Version:", > " Config: 1529674147", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 39]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:29:20,460 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.13 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}b8679e66e4642913f577b4fd919d02c8'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 6.97 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " Total: 150", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Filebucket: 0.00", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " Sysctl: 0.01", > " Augeas: 0.01", > " Firewall: 0.06", > " Sysctl runtime: 0.07", > " File: 0.20", > " Package: 0.25", > " Service: 0.49", > " Last run: 1529674156", > " Config retrieval: 2.60", > " Exec: 5.27", > " Total: 8.96", > "Version:", > " Config: 1529674147", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 37]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:29:20,483 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 2.21 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage4]/ensure: created", > "Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{md5}8307434bc8ed4e2a7df4928fb4232778' to '{md5}28f915a47931d23afe147c7088a6a935'", > "Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{md5}e914149a715dc82812a989314c026305' to '{md5}1483b6eecf3d4796dac2df692d603719'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{md5}913e2613413a45daa402d0fbdbaba676' to '{md5}0f92e52f70b5c64864657201eb9581bb'", > "Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{md5}4496fd5e0e88e764e7beb1ae8f0dda6a' to '{md5}01f68b1480c1ec4e3cc125434dd612a0'", > "Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully", > "Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: Applied catalog in 6.94 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Corrective change: 1", > " Restarted: 1", > " 
Total: 144", > " Out of sync: 8", > " Changed: 8", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Cron: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.20", > " Package: 0.25", > " Service: 0.53", > " Last run: 1529674156", > " Config retrieval: 2.65", > " Exec: 5.27", > " Filebucket: 0.00", > " Total: 8.93", > "Version:", > " Config: 1529674147", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 37]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:29:20,507 p=21516 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 4] ***************** >2018-06-22 09:29:20,535 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:20,561 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:20,572 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:20,596 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 4] *** >2018-06-22 09:29:20,627 p=21516 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:29:20,652 p=21516 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 
09:29:20,663 p=21516 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:29:20,684 p=21516 u=mistral | TASK [Start containers for step 4] ********************************************* >2018-06-22 09:29:21,641 p=21516 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:24,947 p=21516 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:49,282 p=21516 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:49,311 p=21516 u=mistral | TASK [Debug output for task which failed: Start containers for step 4] ********* >2018-06-22 09:29:49,437 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: 0889773fb89fe54a86660bf9c732ad5274100de3e467ec46257b601fd9ece165", > "", > "stderr: " > ] >} >2018-06-22 09:29:49,477 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "cb7d08d4cc0c: Already exists", > "ee85156498b3: Pulling fs layer", > "ee85156498b3: Verifying Checksum", > "ee85156498b3: Download complete", > "ee85156498b3: Pull complete", > "Digest: sha256:ea8f91c94969dd9ddfe978bf52c432130b41bac65c0af6518a32d7e852d269a2", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-listener ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-listener", > "87891b1a71a5: Pulling fs layer", > "87891b1a71a5: Verifying Checksum", > "87891b1a71a5: Download complete", > "87891b1a71a5: Pull complete", > "Digest: sha256:68015593b63e00bc1f41e3b446a2019b6cffda8cce39f0e4ce7cb3237fe8fbfa", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-notifier ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-notifier", > "61805ac4d2bb: Pulling fs layer", > "61805ac4d2bb: Verifying Checksum", > "61805ac4d2bb: Download complete", > "61805ac4d2bb: Pull complete", > "Digest: sha256:5e596d490bf916b566d26a0b70e03198e6c839cf46c5129e60f1d68bbe71f920", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent", > "ea1d509b6f44: Already exists", > "75b3c56ec939: Pulling fs layer", > "75b3c56ec939: Verifying Checksum", > "75b3c56ec939: Download complete", > "75b3c56ec939: Pull complete", > "Digest: sha256:f43a960bfd0618ddcf4868f48fec217cfdc26bcef8ede696de8adc8a199ecead", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent", > "6f5e633fcce0: Pulling fs layer", > "6f5e633fcce0: Verifying Checksum", > "6f5e633fcce0: Download complete", > "6f5e633fcce0: Pull complete", > "Digest: sha256:d402d9bde0a474496dcf1d33bb766f7a1cffafda7f30b4bd8560817d018504b7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-conductor ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-conductor", > "0e3031608420: Already exists", > "102127465b5b: Pulling fs layer", > "102127465b5b: Download complete", > "102127465b5b: Pull complete", > "Digest: sha256:7171aef8364a04f7a40d17ba59f63fd8f829e6b97ecb65d8af1688eb7065fda3", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth", > "01d1d7c271b1: Pulling fs layer", > "01d1d7c271b1: Verifying Checksum", > "01d1d7c271b1: Download complete", > "01d1d7c271b1: Pull complete", > "Digest: sha256:b59d74a9873a382616c808ddf544ef63af4c4e299e3418e26288adec644b5ddf", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy", > "86edb10b8a50: Pulling fs layer", > "86edb10b8a50: Verifying Checksum", > "86edb10b8a50: Download complete", > "86edb10b8a50: Pull complete", > "Digest: sha256:3572771c07ea7a5e47193b304b86a473fdc30bef4d492721543f3012f4742888", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-scheduler ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-scheduler", > "c8d634287ee3: Pulling fs layer", > "c8d634287ee3: Verifying Checksum", > "c8d634287ee3: Download complete", > "c8d634287ee3: Pull complete", > "Digest: sha256:8bfb59b1ea5b1cb2ffc43f439c339130459894023559c08c098e330431c1a354", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-engine ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-engine", > "6c5f7e9a0fe8: Already exists", > "88f0d8af6f23: Pulling fs layer", > "88f0d8af6f23: Verifying Checksum", > "88f0d8af6f23: Download complete", > "88f0d8af6f23: Pull complete", > "Digest: sha256:b9616a7c1034521973cc62e436571a633357d8e56c7416596b98b79e748ebc08", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-container ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-container", > "a98c7da29d65: Already exists", > "9a05208e5890: Pulling fs layer", > "9a05208e5890: Verifying Checksum", > "9a05208e5890: Download complete", > "9a05208e5890: Pull complete", > "Digest: sha256:93e7998c9d7e6afec02cd695a20f85eae7f99167fd2fa66775b2601f8352a55f", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-object ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-object", > "d02d689952a8: Pulling fs layer", > "d02d689952a8: Verifying Checksum", > "d02d689952a8: Download complete", > "d02d689952a8: Pull complete", > "Digest: sha256:64f2e726823838ab7dae517e0e0e72158642c6a54e4126b476c22ed538e4c660", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", > "stdout: bc1dd72c5eb9774da4907e7e297cc84e660932831ab9090ad591215901927e5d", > "stdout: 6c5e6524ae40b160cdc5a3215795031908dfd5547127ea632eb82750e025696e", > "stdout: 3a0dbd831db79f42188a58c01218f253248e6952b98a2025cfcbfa5e07b48f14", > "stdout: 01e4e89cb68ff96b8af20b998bc1a161912053fa4013eb3e238bf63db6af0d74", > "stdout: d22f060f38a5685c9b8098685d81563124009bc9a697fb4432526dba4faea459", > "stdout: 23e24211dfbcdf615865b8ae1471182e02a6a10e479a671d3c1db6fb08c82101", > "stdout: 8bdee9c486e4d10d360a2fce4bdbda66dbe6d3b3bfd9ce20dfbe5d649e8cef49", > "stdout: 972a45a82f0cd50d0cfcca9256949f2ba12f33d7e56ceb49853eb95b6184f62a", > "stdout: f9ed788d05e6bf6faed348f3eb41e4493397b4a411a9c8591ad7f8ddffd35879", > "stdout: b6e948f7cb906374fd32b2cc88f58c68f43fc86f7ffc2678053b7e3ad474f55a", > "stdout: 56edaaff700b560ab284e3b697f2ea702149d8229827ec786bfb172097233f30", > "stdout: f40a00a188046640f88eabacd6378f6bca25606ae6583b103f614b1049ec6f75", > "stdout: Running command: '/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128'", > "stderr: + sudo -E kolla_set_configs", > "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Deleting /etc/gnocchi/gnocchi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/gnocchi/gnocchi.conf to /etc/gnocchi/gnocchi.conf", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.d/10-gnocchi_wsgi.conf to /etc/httpd/conf.d/10-gnocchi_wsgi.conf", > "INFO:__main__:Deleting /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.d/ssl.conf to /etc/httpd/conf.d/ssl.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/access_compat.load to /etc/httpd/conf.modules.d/access_compat.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/actions.load to /etc/httpd/conf.modules.d/actions.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.conf to /etc/httpd/conf.modules.d/alias.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/alias.load to /etc/httpd/conf.modules.d/alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_basic.load to /etc/httpd/conf.modules.d/auth_basic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/auth_digest.load to /etc/httpd/conf.modules.d/auth_digest.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_anon.load to /etc/httpd/conf.modules.d/authn_anon.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_core.load to /etc/httpd/conf.modules.d/authn_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_dbm.load to /etc/httpd/conf.modules.d/authn_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authn_file.load to /etc/httpd/conf.modules.d/authn_file.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_core.load to /etc/httpd/conf.modules.d/authz_core.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_dbm.load to 
/etc/httpd/conf.modules.d/authz_dbm.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_groupfile.load to /etc/httpd/conf.modules.d/authz_groupfile.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_host.load to /etc/httpd/conf.modules.d/authz_host.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_owner.load to /etc/httpd/conf.modules.d/authz_owner.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/authz_user.load to /etc/httpd/conf.modules.d/authz_user.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.conf to /etc/httpd/conf.modules.d/autoindex.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/autoindex.load to /etc/httpd/conf.modules.d/autoindex.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cache.load to /etc/httpd/conf.modules.d/cache.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/cgi.load to /etc/httpd/conf.modules.d/cgi.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav.load to /etc/httpd/conf.modules.d/dav.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.conf to /etc/httpd/conf.modules.d/dav_fs.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dav_fs.load to /etc/httpd/conf.modules.d/dav_fs.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.conf to /etc/httpd/conf.modules.d/deflate.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/deflate.load to /etc/httpd/conf.modules.d/deflate.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.conf to 
/etc/httpd/conf.modules.d/dir.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/dir.load to /etc/httpd/conf.modules.d/dir.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/env.load to /etc/httpd/conf.modules.d/env.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/expires.load to /etc/httpd/conf.modules.d/expires.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ext_filter.load to /etc/httpd/conf.modules.d/ext_filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/filter.load to /etc/httpd/conf.modules.d/filter.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/include.load to /etc/httpd/conf.modules.d/include.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/log_config.load to /etc/httpd/conf.modules.d/log_config.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/logio.load to /etc/httpd/conf.modules.d/logio.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.conf to /etc/httpd/conf.modules.d/mime.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime.load to /etc/httpd/conf.modules.d/mime.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.conf to /etc/httpd/conf.modules.d/mime_magic.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/mime_magic.load to /etc/httpd/conf.modules.d/mime_magic.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.conf to /etc/httpd/conf.modules.d/negotiation.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/negotiation.load to /etc/httpd/conf.modules.d/negotiation.load", > 
"INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.conf to /etc/httpd/conf.modules.d/prefork.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/prefork.load to /etc/httpd/conf.modules.d/prefork.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/rewrite.load to /etc/httpd/conf.modules.d/rewrite.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.conf to /etc/httpd/conf.modules.d/setenvif.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/setenvif.load to /etc/httpd/conf.modules.d/setenvif.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/socache_shmcb.load to /etc/httpd/conf.modules.d/socache_shmcb.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/speling.load to /etc/httpd/conf.modules.d/speling.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/ssl.load to /etc/httpd/conf.modules.d/ssl.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.conf to /etc/httpd/conf.modules.d/status.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/status.load to /etc/httpd/conf.modules.d/status.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/substitute.load to /etc/httpd/conf.modules.d/substitute.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/suexec.load to /etc/httpd/conf.modules.d/suexec.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/systemd.load to /etc/httpd/conf.modules.d/systemd.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/unixd.load to /etc/httpd/conf.modules.d/unixd.load", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/usertrack.load to /etc/httpd/conf.modules.d/usertrack.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/version.load to /etc/httpd/conf.modules.d/version.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/vhost_alias.load to /etc/httpd/conf.modules.d/vhost_alias.load", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.conf to /etc/httpd/conf.modules.d/wsgi.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf.modules.d/wsgi.load to /etc/httpd/conf.modules.d/wsgi.load", > "INFO:__main__:Deleting /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/httpd.conf to /etc/httpd/conf/httpd.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/httpd/conf/ports.conf to /etc/httpd/conf/ports.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/tripleo.cnf to /etc/my.cnf.d/tripleo.cnf", > "INFO:__main__:Creating directory /etc/systemd/system/httpd.service.d", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/systemd/system/httpd.service.d/httpd.conf to /etc/systemd/system/httpd.service.d/httpd.conf", > "INFO:__main__:Creating directory /var/www/cgi-bin/gnocchi", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/var/www/cgi-bin/gnocchi/app to /var/www/cgi-bin/gnocchi/app", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.conf to /etc/ceph/ceph.conf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.admin.keyring to /etc/ceph/ceph.client.admin.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mon.keyring to /etc/ceph/ceph.mon.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.mgr.controller-0.keyring to /etc/ceph/ceph.mgr.controller-0.keyring", > "INFO:__main__:Copying 
/var/lib/kolla/config_files/src-ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.manila.keyring to /etc/ceph/ceph.client.manila.keyring", > "INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.radosgw.keyring to /etc/ceph/ceph.client.radosgw.keyring", > "INFO:__main__:Writing out command to execute", > "INFO:__main__:Setting permission for /var/log/gnocchi", > "INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring", > "++ cat /run_command", > "+ CMD='/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128'", > "+ ARGS=", > "+ [[ ! -n '' ]]", > "+ . kolla_extend_start", > "++ GNOCCHI_LOG_DIR=/var/log/kolla/gnocchi", > "++ [[ ! -d /var/log/kolla/gnocchi ]]", > "++ mkdir -p /var/log/kolla/gnocchi", > "+++ stat -c %U:%G /var/log/kolla/gnocchi", > "++ [[ root:kolla != \\g\\n\\o\\c\\c\\h\\i\\:\\k\\o\\l\\l\\a ]]", > "++ chown gnocchi:kolla /var/log/kolla/gnocchi", > "+++ stat -c %a /var/log/kolla/gnocchi", > "++ [[ 2755 != \\7\\5\\5 ]]", > "++ chmod 755 /var/log/kolla/gnocchi", > "++ . /usr/local/bin/kolla_gnocchi_extend_start", > "+++ [[ rhel =~ debian|ubuntu ]]", > "+++ rm -rf /var/run/httpd/htcacheclean /run/httpd/htcacheclean '/tmp/httpd*'", > "+++ [[ -n '' ]]", > "+ echo 'Running command: '\\''/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128'\\'''", > "+ exec /usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", > "2018-06-22 13:29:33,950 [21] WARNING oslo_config.cfg: Option \"coordination_url\" from group \"storage\" is deprecated. 
Use option \"coordination_url\" from group \"DEFAULT\".", > "2018-06-22 13:29:33,950 [21] INFO gnocchi.service: Gnocchi version 4.2.5", > "2018-06-22 13:29:33,950 [21] DEBUG gnocchi.service: ********************************************************************************", > "2018-06-22 13:29:33,950 [21] DEBUG gnocchi.service: Configuration options gathered from:", > "2018-06-22 13:29:33,951 [21] DEBUG gnocchi.service: command line args: ['--sacks-number=128']", > "2018-06-22 13:29:33,951 [21] DEBUG gnocchi.service: config files: ['/usr/share/gnocchi/gnocchi-dist.conf', '/etc/gnocchi/gnocchi.conf']", > "2018-06-22 13:29:33,951 [21] DEBUG gnocchi.service: ================================================================================", > "2018-06-22 13:29:33,951 [21] DEBUG gnocchi.service: config_dir = []", > "2018-06-22 13:29:33,951 [21] DEBUG gnocchi.service: config_file = ['/usr/share/gnocchi/gnocchi-dist.conf', '/etc/gnocchi/gnocchi.conf']", > "2018-06-22 13:29:33,951 [21] DEBUG gnocchi.service: coordination_url = ****", > "2018-06-22 13:29:33,951 [21] DEBUG gnocchi.service: debug = True", > "2018-06-22 13:29:33,952 [21] DEBUG gnocchi.service: log_dir = /var/log/gnocchi", > "2018-06-22 13:29:33,952 [21] DEBUG gnocchi.service: log_file = None", > "2018-06-22 13:29:33,952 [21] DEBUG gnocchi.service: parallel_operations = 8", > "2018-06-22 13:29:33,952 [21] DEBUG gnocchi.service: sacks_number = 128", > "2018-06-22 13:29:33,952 [21] DEBUG gnocchi.service: skip_archive_policies_creation = False", > "2018-06-22 13:29:33,952 [21] DEBUG gnocchi.service: skip_incoming = False", > "2018-06-22 13:29:33,952 [21] DEBUG gnocchi.service: skip_index = False", > "2018-06-22 13:29:33,952 [21] DEBUG gnocchi.service: skip_storage = False", > "2018-06-22 13:29:33,953 [21] DEBUG gnocchi.service: syslog_log_facility = user", > "2018-06-22 13:29:33,953 [21] DEBUG gnocchi.service: use_journal = False", > "2018-06-22 13:29:33,953 [21] DEBUG gnocchi.service: use_syslog = False", > 
"2018-06-22 13:29:33,953 [21] DEBUG gnocchi.service: verbose = True", > "2018-06-22 13:29:33,953 [21] DEBUG gnocchi.service: statsd.archive_policy_name = low", > "2018-06-22 13:29:33,953 [21] DEBUG gnocchi.service: statsd.creator = None", > "2018-06-22 13:29:33,953 [21] DEBUG gnocchi.service: statsd.flush_delay = 10.0", > "2018-06-22 13:29:33,953 [21] DEBUG gnocchi.service: statsd.host = 0.0.0.0", > "2018-06-22 13:29:33,953 [21] DEBUG gnocchi.service: statsd.port = 8125", > "2018-06-22 13:29:33,954 [21] DEBUG gnocchi.service: statsd.resource_id = 0a8b55df-f90f-491c-8cb9-7cdecec6fc26", > "2018-06-22 13:29:33,954 [21] DEBUG gnocchi.service: incoming.ceph_conffile = /etc/ceph/ceph.conf", > "2018-06-22 13:29:33,954 [21] DEBUG gnocchi.service: incoming.ceph_keyring = /etc/ceph/ceph.client.openstack.keyring", > "2018-06-22 13:29:33,954 [21] DEBUG gnocchi.service: incoming.ceph_pool = metrics", > "2018-06-22 13:29:33,955 [21] DEBUG gnocchi.service: incoming.ceph_secret = ****", > "2018-06-22 13:29:33,955 [21] DEBUG gnocchi.service: incoming.ceph_timeout = 30", > "2018-06-22 13:29:33,955 [21] DEBUG gnocchi.service: incoming.ceph_username = openstack", > "2018-06-22 13:29:33,956 [21] DEBUG gnocchi.service: incoming.driver = redis", > "2018-06-22 13:29:33,956 [21] DEBUG gnocchi.service: incoming.file_basepath = /var/lib/gnocchi", > "2018-06-22 13:29:33,956 [21] DEBUG gnocchi.service: incoming.redis_url = redis://:hE8opX2LrXtwZRhh8LLr1rirM@172.17.1.11:6379/", > "2018-06-22 13:29:33,956 [21] DEBUG gnocchi.service: incoming.s3_access_key_id = ", > "2018-06-22 13:29:33,956 [21] DEBUG gnocchi.service: incoming.s3_bucket_prefix = gnocchi", > "2018-06-22 13:29:33,957 [21] DEBUG gnocchi.service: incoming.s3_check_consistency_timeout = 60.0", > "2018-06-22 13:29:33,957 [21] DEBUG gnocchi.service: incoming.s3_endpoint_url = ", > "2018-06-22 13:29:33,957 [21] DEBUG gnocchi.service: incoming.s3_max_pool_connections = 50", > "2018-06-22 13:29:33,957 [21] DEBUG gnocchi.service: 
incoming.s3_region_name = ", > "2018-06-22 13:29:33,958 [21] DEBUG gnocchi.service: incoming.s3_secret_access_key = ", > "2018-06-22 13:29:33,958 [21] DEBUG gnocchi.service: incoming.swift_auth_insecure = False", > "2018-06-22 13:29:33,958 [21] DEBUG gnocchi.service: incoming.swift_auth_version = 1", > "2018-06-22 13:29:33,958 [21] DEBUG gnocchi.service: incoming.swift_authurl = http://localhost:8080/auth/v1.0", > "2018-06-22 13:29:33,959 [21] DEBUG gnocchi.service: incoming.swift_cacert = ", > "2018-06-22 13:29:33,959 [21] DEBUG gnocchi.service: incoming.swift_container_prefix = gnocchi", > "2018-06-22 13:29:33,959 [21] DEBUG gnocchi.service: incoming.swift_endpoint_type = publicURL", > "2018-06-22 13:29:33,959 [21] DEBUG gnocchi.service: incoming.swift_key = ****", > "2018-06-22 13:29:33,960 [21] DEBUG gnocchi.service: incoming.swift_preauthtoken = ****", > "2018-06-22 13:29:33,960 [21] DEBUG gnocchi.service: incoming.swift_project_domain_name = Default", > "2018-06-22 13:29:33,960 [21] DEBUG gnocchi.service: incoming.swift_project_name = ", > "2018-06-22 13:29:33,960 [21] DEBUG gnocchi.service: incoming.swift_region = ", > "2018-06-22 13:29:33,961 [21] DEBUG gnocchi.service: incoming.swift_service_type = object-store", > "2018-06-22 13:29:33,961 [21] DEBUG gnocchi.service: incoming.swift_timeout = 300", > "2018-06-22 13:29:33,961 [21] DEBUG gnocchi.service: incoming.swift_url = ", > "2018-06-22 13:29:33,961 [21] DEBUG gnocchi.service: incoming.swift_user = admin:admin", > "2018-06-22 13:29:33,962 [21] DEBUG gnocchi.service: incoming.swift_user_domain_name = Default", > "2018-06-22 13:29:33,962 [21] DEBUG gnocchi.service: metricd.greedy = True", > "2018-06-22 13:29:33,962 [21] DEBUG gnocchi.service: metricd.metric_cleanup_delay = 300", > "2018-06-22 13:29:33,962 [21] DEBUG gnocchi.service: metricd.metric_processing_delay = 30", > "2018-06-22 13:29:33,962 [21] DEBUG gnocchi.service: metricd.metric_reporting_delay = 120", > "2018-06-22 13:29:33,962 [21] DEBUG 
gnocchi.service: metricd.processing_replicas = 3", > "2018-06-22 13:29:33,962 [21] DEBUG gnocchi.service: metricd.workers = 4", > "2018-06-22 13:29:33,963 [21] DEBUG gnocchi.service: database.backend = sqlalchemy", > "2018-06-22 13:29:33,963 [21] DEBUG gnocchi.service: database.connection = ****", > "2018-06-22 13:29:33,963 [21] DEBUG gnocchi.service: database.connection_debug = 0", > "2018-06-22 13:29:33,963 [21] DEBUG gnocchi.service: database.connection_parameters = ", > "2018-06-22 13:29:33,963 [21] DEBUG gnocchi.service: database.connection_recycle_time = 3600", > "2018-06-22 13:29:33,963 [21] DEBUG gnocchi.service: database.connection_trace = False", > "2018-06-22 13:29:33,963 [21] DEBUG gnocchi.service: database.db_inc_retry_interval = True", > "2018-06-22 13:29:33,963 [21] DEBUG gnocchi.service: database.db_max_retries = 20", > "2018-06-22 13:29:33,964 [21] DEBUG gnocchi.service: database.db_max_retry_interval = 10", > "2018-06-22 13:29:33,964 [21] DEBUG gnocchi.service: database.db_retry_interval = 1", > "2018-06-22 13:29:33,964 [21] DEBUG gnocchi.service: database.max_overflow = 50", > "2018-06-22 13:29:33,964 [21] DEBUG gnocchi.service: database.max_pool_size = 5", > "2018-06-22 13:29:33,964 [21] DEBUG gnocchi.service: database.max_retries = 10", > "2018-06-22 13:29:33,964 [21] DEBUG gnocchi.service: database.min_pool_size = 1", > "2018-06-22 13:29:33,964 [21] DEBUG gnocchi.service: database.mysql_enable_ndb = False", > "2018-06-22 13:29:33,965 [21] DEBUG gnocchi.service: database.mysql_sql_mode = TRADITIONAL", > "2018-06-22 13:29:33,965 [21] DEBUG gnocchi.service: database.pool_timeout = None", > "2018-06-22 13:29:33,965 [21] DEBUG gnocchi.service: database.retry_interval = 10", > "2018-06-22 13:29:33,965 [21] DEBUG gnocchi.service: database.slave_connection = ****", > "2018-06-22 13:29:33,965 [21] DEBUG gnocchi.service: database.sqlite_synchronous = True", > "2018-06-22 13:29:33,965 [21] DEBUG gnocchi.service: database.use_db_reconnect = False", > 
"2018-06-22 13:29:33,965 [21] DEBUG gnocchi.service: storage.ceph_conffile = /etc/ceph/ceph.conf", > "2018-06-22 13:29:33,965 [21] DEBUG gnocchi.service: storage.ceph_keyring = /etc/ceph/ceph.client.openstack.keyring", > "2018-06-22 13:29:33,965 [21] DEBUG gnocchi.service: storage.ceph_pool = metrics", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.ceph_secret = ****", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.ceph_timeout = 30", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.ceph_username = openstack", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.driver = ceph", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.file_basepath = /var/lib/gnocchi", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.redis_url = redis://localhost:6379/", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.s3_access_key_id = None", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.s3_bucket_prefix = gnocchi", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.s3_check_consistency_timeout = 60.0", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.s3_endpoint_url = None", > "2018-06-22 13:29:33,966 [21] DEBUG gnocchi.service: storage.s3_max_pool_connections = 50", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.s3_region_name = None", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.s3_secret_access_key = None", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.swift_auth_insecure = False", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.swift_auth_version = 1", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.swift_authurl = http://localhost:8080/auth/v1.0", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.swift_cacert = None", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.swift_container_prefix = gnocchi", > "2018-06-22 13:29:33,967 [21] DEBUG 
gnocchi.service: storage.swift_endpoint_type = publicURL", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.swift_key = ****", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.swift_preauthtoken = ****", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.swift_project_domain_name = Default", > "2018-06-22 13:29:33,967 [21] DEBUG gnocchi.service: storage.swift_project_name = None", > "2018-06-22 13:29:33,968 [21] DEBUG gnocchi.service: storage.swift_region = None", > "2018-06-22 13:29:33,968 [21] DEBUG gnocchi.service: storage.swift_service_type = object-store", > "2018-06-22 13:29:33,968 [21] DEBUG gnocchi.service: storage.swift_timeout = 300", > "2018-06-22 13:29:33,968 [21] DEBUG gnocchi.service: storage.swift_url = None", > "2018-06-22 13:29:33,968 [21] DEBUG gnocchi.service: storage.swift_user = admin:admin", > "2018-06-22 13:29:33,968 [21] DEBUG gnocchi.service: storage.swift_user_domain_name = Default", > "2018-06-22 13:29:33,968 [21] DEBUG gnocchi.service: indexer.url = ****", > "2018-06-22 13:29:33,968 [21] DEBUG gnocchi.service: api.auth_mode = keystone", > "2018-06-22 13:29:33,968 [21] DEBUG gnocchi.service: api.host = 0.0.0.0", > "2018-06-22 13:29:33,969 [21] DEBUG gnocchi.service: api.max_limit = 1000", > "2018-06-22 13:29:33,969 [21] DEBUG gnocchi.service: api.operation_timeout = 10", > "2018-06-22 13:29:33,969 [21] DEBUG gnocchi.service: api.paste_config = api-paste.ini", > "2018-06-22 13:29:33,969 [21] DEBUG gnocchi.service: api.port = 8041", > "2018-06-22 13:29:33,969 [21] DEBUG gnocchi.service: api.uwsgi_mode = http", > "2018-06-22 13:29:33,969 [21] DEBUG gnocchi.service: archive_policy.default_aggregation_methods = ['mean', 'min', 'max', 'sum', 'std', 'count']", > "2018-06-22 13:29:33,969 [21] DEBUG gnocchi.service: ********************************************************************************", > "2018-06-22 13:29:34,342 [21] INFO gnocchi.cli.manage: Upgrading indexer SQLAlchemyIndexer: 
mysql+pymysql://gnocchi:FGAKvoxMeTA8uPQ0VS2Ol75Q1@172.17.1.17/gnocchi?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf", > "2018-06-22 13:29:34,552 [21] INFO gnocchi.common.ceph: Ceph storage backend use 'cradox' python library", > "2018-06-22 13:29:34,593 [21] INFO gnocchi.cli.manage: Upgrading storage CephStorage: 53912472-747b-11e8-95a3-5254003d7dcb", > "2018-06-22 13:29:34,594 [21] INFO gnocchi.cli.manage: Upgrading incoming storage RedisStorage: StrictRedis<ConnectionPool<Connection<host=172.17.1.11,port=6379,db=0>>>", > "stdout: 159cd19812646d2c2970d1c468a64b6a3d746022d2516c5467da06e8a15b9e02", > "stdout: 419028caad05a44d4295497d8b6d4e58d24b39f35c8964193c717535e3906867", > "stdout: dd9f1a6f06fa3947a1020c7f833a565ec1e8b4bb762211513536f6164a258f73", > "stdout: fff15b56502e9b4f0ae5d58d82f83c75b53426ac898a2006940a833d87dd1225", > "stdout: 0eec8221ce2d04500083c65ed0213a26649ca552e646ea725c974a8c5c833460", > "stdout: 9ea2f4b25f2e57287ee242b36888ce317948e01eae033559710ef4ab75a72c47", > "stdout: d6664d936979c73c72f8719429ae1d0dd4057d35d51f2301c746eeeb3490ca43", > "stdout: a051117bdc63883ea9a71af5410d873ffabee9844b4bb21bd2eaefcd784ebb12", > "stdout: 566f9afa841499176ff5688d49fda4450fafb6e707031538b9979a27f9c43e22", > "stdout: 7ad187d2d80e16533ce1c69f2d56aa773460248d373e9308e3b522b968156207", > "stdout: 697f0f342bc3ff2a634b7ae1164d0725a8370e6e9741c074e0e63f4e30103012", > "stdout: ea5fcd7a4941a64c2727140a3b711e51205536fcee56971aed0428d210ed0d0f", > "stdout: 1b589cf4e033492f8a4cc4c0aa83700a00aca5c6abde8839c41d0dc11bed521a", > "stdout: 996015a3d108a473298b66e19f914856ad1498669e94e344048b527fda01873d", > "stdout: 083590f08ad43e7071e194e8eab1433f42c13a19584bfd53150926018a611c01", > "stdout: 9f37c086bb8ce1617ba9601c1527745a4c315f77a2162650050d1b7e30473524", > "stdout: 813ff14e6f57845258483e90165c2f7be83fc55258eee7450f1c790a4d234f12", > "stdout: 2948986d0b67bb457caf98072904ae54f5d1aa4bf91cf7595d828da663a1e973", > "stdout: 
f46cde613cde914e0cdc55f9ff09630b7688ecf5e82e735121403dcec3b57f43", > "stdout: 22fb07058aaba357501f04bbb4ad6d2c00905e2daaf85626c7cb953c41026f6f", > "stdout: 69fb8fab09d4fd072ebb24bb5590ec3c4be7d0df45f57b28096a637fa013b23f", > "stdout: 9fc6c40db0539724207a33b2ee62fe4d8d6b3836b7ced721583d314acc07ec0d", > "stdout: 32520a0855d3e10c16ae5f7c01121a7732ae5a9f5db69513b3529c3a75cf537d", > "stdout: 51d0f7fb8fdf06427f73a59ae78bac798aa64e8d323cb67b6774398852196718", > "stdout: ", > "stdout: e0b247c6a0fccf37821c505f5e1bef4e7ec2f276fdef986d5032ad3f3226bd75", > "stdout: 222ede8f633a5f11d02ac76951407c49f7cc3123993f87054363f6cea4b76464", > "stdout: 16868a3fe4499d68c6865d60bfcb233c8ebf6d24b3f2b71a736f2f86e52e5548", > "stdout: 36bf5a06b4794e6edea980e83a4564585775b15f2b4dde7ca7bd857703809675", > "stdout: 69623ea8f127df554394e22a103e257040d4eb2b7fe7856a5b1577349765a509", > "stdout: ff901654c580224ba9464aadbd58f12d874d1dfe0d934c4495b268ed214eff22", > "stdout: 929bc274dd30e25371d91de7f2bad2329e40eca4afe20c2bfc85406140ffa625", > "stdout: 968ec9392a5b872a13c78d232306ec43c687af7a7ab4b5671556cd523e3a303a", > "stdout: d59829f1c9c603364b3c261725cd8b4273ebadd4734de64867ed61782419576f" > ] >} >2018-06-22 09:29:49,487 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "333aa6b2b383: Already exists", > "90108de13a14: Pulling fs layer", > "90108de13a14: Verifying Checksum", > "90108de13a14: Download complete", > "90108de13a14: Pull complete", > "Digest: sha256:e645155266de12baafedb66bc71148fb800414967c09c7b078c289ff61b17fb3", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent", > "ea1d509b6f44: Already exists", > "6f5e633fcce0: Pulling fs layer", > "6f5e633fcce0: Verifying Checksum", > "6f5e633fcce0: Download complete", > "6f5e633fcce0: Pull complete", > "Digest: sha256:d402d9bde0a474496dcf1d33bb766f7a1cffafda7f30b4bd8560817d018504b7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", > "stdout: b0d33faa3dbb166854b1561d9285d26cf51c18722d0d3267da937e836d2d5103", > "stdout: 9de225e901e73b636b0630bc312ce11d8fbf4da48f595c6243ed8e554cd66908", > "stdout: 829b99a91255532409afb86d29b0514af5ee5b2c6f093251c6cb365b115d6c49", > "stdout: Secret 53912472-747b-11e8-95a3-5254003d7dcb created", > "Secret value set", > "stdout: fcdf2750d52bf578189453694ddcf84eda1e98080671a956d4a21148d0354570", > "stdout: cfe13b27b5bb9b1bfeb4ac3178972b443c97f9ac8d0ce12761932ca3610f8307" > ] >} >2018-06-22 09:29:49,514 p=21516 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks4.json exists] ******** >2018-06-22 09:29:49,951 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:29:49,967 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, 
"stat": {"exists": false}} >2018-06-22 09:29:50,255 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:29:50,281 p=21516 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 4] ******************** >2018-06-22 09:29:50,312 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:50,339 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:50,352 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:50,376 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 4] *** >2018-06-22 09:29:50,408 p=21516 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,436 p=21516 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,452 p=21516 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,458 p=21516 u=mistral | PLAY [External deployment step 5] ********************************************** >2018-06-22 09:29:50,480 p=21516 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-06-22 09:29:50,503 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,522 p=21516 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-06-22 09:29:50,552 p=21516 u=mistral | skipping: [undercloud] => 
(item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,558 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,559 p=21516 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/ba9a5c83-0a9e-4fec-9c7c-818ccd0be33e/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,578 p=21516 u=mistral | TASK [generate inventory] ****************************************************** >2018-06-22 09:29:50,597 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,616 p=21516 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-06-22 09:29:50,643 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,665 p=21516 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-06-22 09:29:50,685 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,704 p=21516 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-06-22 09:29:50,724 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,743 p=21516 u=mistral | TASK [generate ceph-ansible extra 
vars] **************************************** >2018-06-22 09:29:50,763 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,781 p=21516 u=mistral | TASK [generate collect nodes uuid playbook] ************************************ >2018-06-22 09:29:50,799 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,818 p=21516 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-06-22 09:29:50,837 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,855 p=21516 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-06-22 09:29:50,874 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,891 p=21516 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-06-22 09:29:50,913 p=21516 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,932 p=21516 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-06-22 09:29:50,950 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:50,967 p=21516 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-06-22 09:29:50,990 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,048 p=21516 u=mistral | TASK [set ceph-ansible group vars mons] 
**************************************** >2018-06-22 09:29:51,068 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,086 p=21516 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-06-22 09:29:51,104 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,122 p=21516 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-06-22 09:29:51,141 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,158 p=21516 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-06-22 09:29:51,176 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,194 p=21516 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-06-22 09:29:51,212 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,230 p=21516 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-06-22 09:29:51,247 p=21516 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,253 p=21516 u=mistral | PLAY [Overcloud deploy step tasks for 5] *************************************** >2018-06-22 09:29:51,278 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:29:51,309 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,340 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:29:51,352 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,375 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:29:51,404 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,431 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,442 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,465 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:29:51,492 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,519 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,532 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,555 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:29:51,584 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,616 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,627 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,650 p=21516 u=mistral | TASK [include_role] ************************************************************ >2018-06-22 09:29:51,679 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:29:51,703 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,719 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,724 p=21516 u=mistral | PLAY [Overcloud common deploy step tasks 5] ************************************ >2018-06-22 09:29:51,751 p=21516 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-06-22 09:29:51,780 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,806 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,818 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,840 p=21516 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-06-22 09:29:51,868 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,896 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,908 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,932 p=21516 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-06-22 09:29:51,962 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:51,988 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,000 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-06-22 09:29:52,023 p=21516 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-06-22 09:29:52,051 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,078 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,090 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,112 p=21516 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-06-22 09:29:52,139 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,163 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,181 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,207 p=21516 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-06-22 09:29:52,235 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,260 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,273 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,296 p=21516 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-06-22 09:29:52,352 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport 
OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} 
seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': 'nova_api_discover_hosts.sh'}) => {"changed": false, "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 
0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo \"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,353 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 
'mode': u'0700'}, 'key': 'create_swift_secret.sh'}) => {"changed": false, "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,358 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,359 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': 'neutron_ovs_agent_launcher.sh'}) => {"changed": false, "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports 
--config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,360 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and 
responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': 'set_swift_keymaster_key_id.sh'}) => {"changed": false, "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,363 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply 
$EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": false, "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,365 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": false, "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| 
*' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,391 p=21516 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-06-22 09:29:52,421 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,422 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,451 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,453 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,453 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,454 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,457 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 
'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,457 p=21516 u=mistral | skipping: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,460 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,461 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,464 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,469 p=21516 u=mistral | skipping: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,480 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,486 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,491 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,494 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,500 p=21516 
u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,504 p=21516 u=mistral | skipping: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,530 p=21516 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-06-22 09:29:52,559 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,583 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,597 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:29:52,620 p=21516 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-06-22 09:29:52,650 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,676 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,689 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,711 p=21516 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-06-22 09:29:52,767 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 
09:29:52,785 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_libvirt': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 
'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", 
"start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,787 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,790 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,792 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,795 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '53912472-747b-11e8-95a3-5254003d7dcb' --base64 'AQB2NypbAAAAABAAQlplrtVnqnJzdcaHgTJsOA=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", 
"/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,798 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,825 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', 
u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'memcached_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log'], 'user': u'root', 'volumes': [u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'detach': False, 'privileged': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': 
False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL', u'DB_ROOT_PASSWORD=zeHIZe0ICg'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', 
u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro', u'/var/log/containers/memcached:/var/log/'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": false, "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag 
'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached.log 2>&1"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "memcached_init_logs": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; touch /var/log/memcached.log && chown ${USER} /var/log/memcached.log"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro", "/var/log/containers/memcached:/var/log/"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=8omuhCCcfP1YuJzPZS8tLp3AL", "DB_ROOT_PASSWORD=zeHIZe0ICg"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=n8jIt9appI3hU5NXoG3W"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,844 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', 
u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 
'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', 
u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': 
[u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', 
u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', 
u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'6CLNy5Ewot5UhcBYmt27oGDMD'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', 
u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", 
"/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": 
"192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "6CLNy5Ewot5UhcBYmt27oGDMD"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", 
"/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, 
"image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": 
"/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,866 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R 
gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 
'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', 
u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', 
u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'2', 
u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; 
fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if 
/usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": 
"192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", 
"start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": 
["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", 
"/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-06-19.4", "user": "root", "volumes": 
["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show 
galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-06-19.4", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs 
resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,872 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,872 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": false, "item": {"key": "step_2", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,873 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", 
"value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,877 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_statsd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then 
/usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': 
u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'gnocchi_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1529672056'], 'command': [u'/docker_puppet_apply.sh', u'5', 
u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}}, 'key': u'step_5'}) => {"changed": false, "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 5; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo \"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show 
openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-06-19.4", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1529672056"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,900 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'swift_container_updater': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 
'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', 
u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4', 'pid': 
u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': 
False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': 
{'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 
'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', 
u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"aodh_api": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", 
"/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-06-19.4", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", 
"/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-06-19.4", "net": 
"host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, 
"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", "/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", 
"/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-06-19.4", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-06-19.4", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": 
"always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": 
"192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-06-19.4", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-06-19.4", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,916 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,974 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": false, "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-06-19.4", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,975 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": false, "item": {"key": "step_6", "value": {}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:52,999 p=21516 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-06-22 09:29:53,060 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,061 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} 
>2018-06-22 09:29:53,073 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,096 p=21516 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-06-22 09:29:53,172 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,176 p=21516 u=mistral | skipping: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,179 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,180 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,184 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': '/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": 
"/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,192 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': '/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,195 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,202 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/var/lib/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/var/lib/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,206 p=21516 u=mistral | skipping: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile 
/var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,297 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': '/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,302 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/keystone.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,307 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf 
--config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,311 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,315 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor 
/etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,321 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,325 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,330 
p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,337 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,340 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf 
--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': '/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,345 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': 
u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/haproxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,350 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} 
>2018-06-22 09:29:53,354 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,361 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,366 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,372 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': 
u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/redis.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": "/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,374 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,382 p=21516 u=mistral | 
skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/glance_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,385 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, 
"source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,390 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,396 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} 
>2018-06-22 09:29:53,401 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': '/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,408 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,415 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': '/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,469 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,474 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,480 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,483 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,490 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': '/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,494 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': 
u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/mysql.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,500 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,505 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,512 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": 
"/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,516 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': u'/var/log/keystone', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,519 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,527 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_replicator.json'}) => 
{"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,530 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,534 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,539 p=21516 u=mistral | 
skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,544 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,552 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': 
u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,555 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": false, "item": {"key": 
"/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,567 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,568 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server 
--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,571 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,577 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': 
u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,581 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": 
true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,594 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,595 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 
09:29:53,598 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/nova_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,606 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,611 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': '/var/lib/kolla/config_files/redis_tls_proxy.json'}) => 
{"changed": false, "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,622 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,627 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": false, 
"item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,635 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,644 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/panko_api.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,645 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,648 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 
'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,659 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,662 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server 
/etc/swift/container-server.conf'}, 'key': '/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,667 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': '/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,675 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': '/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,686 p=21516 u=mistral | skipping: [controller-0] => (item={'value': 
{'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': '/var/lib/kolla/config_files/iscsid.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,688 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': '/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,692 p=21516 u=mistral | skipping: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': '/var/lib/kolla/config_files/horizon.json'}) => {"changed": false, "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,747 p=21516 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-06-22 09:29:53,759 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:29:53,784 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:29:53,810 p=21516 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-06-22 09:29:53,834 p=21516 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-06-22 09:29:53,890 p=21516 
u=mistral | skipping: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4'}], 'key': u'step_3'}) => {"changed": false, "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-06-19.4", "config_volume": "keystone_init_tasks", "puppet_tags": "keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,927 p=21516 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-06-22 09:29:53,954 p=21516 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,978 p=21516 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:53,992 p=21516 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-06-22 09:29:54,013 p=21516 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-06-22 09:29:54,848 p=21516 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "039e0b234f00fbd1242930f0d5dc67e8b4c067fe", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "868a394a237b10c579b0c7ac25057be6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, 
"src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529674194.18-142032242631918/source", "state": "file", "uid": 0} >2018-06-22 09:29:54,850 p=21516 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "039e0b234f00fbd1242930f0d5dc67e8b4c067fe", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "868a394a237b10c579b0c7ac25057be6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529674194.14-263103784240152/source", "state": "file", "uid": 0} >2018-06-22 09:29:54,966 p=21516 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "039e0b234f00fbd1242930f0d5dc67e8b4c067fe", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "868a394a237b10c579b0c7ac25057be6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1529674194.11-240747046901577/source", "state": "file", "uid": 0} >2018-06-22 09:29:55,027 p=21516 u=mistral | TASK [Run puppet host configuration for step 5] ******************************** >2018-06-22 09:30:05,684 p=21516 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:30:06,036 p=21516 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:30:13,231 p=21516 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-06-22 09:30:13,254 p=21516 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 5] *** >2018-06-22 09:30:13,312 p=21516 u=mistral | ok: [controller-0] => { > 
"failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.28 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller5]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 5.02 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Changed: 2", > " Out of sync: 2", > " Total: 226", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " File line: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " User: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.00", > " Sysctl: 0.01", > " Augeas: 0.02", > " Firewall: 0.02", > " Service: 0.21", > " Pcmk property: 0.41", > " Package: 0.47", > " Pcmk resource default: 0.47", > " Exec: 1.01", > " File: 1.16", > " Last run: 1529674212", > " Config retrieval: 4.73", > " Total: 8.52", > " Concat fragment: 0.00", > "Version:", > " Config: 1529674203", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 39]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:30:13,339 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.95 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute5]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.42 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 150", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Concat fragment: 0.00", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl: 0.00", > " Sysctl runtime: 0.00", > " Firewall: 0.01", > " Augeas: 0.01", > " File: 0.14", > " Service: 0.14", > " Exec: 0.25", > " Package: 0.27", > " Last run: 1529674205", > " Config retrieval: 2.33", > " Total: 3.16", > " Filebucket: 0.00", > "Version:", > " Config: 1529674201", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 37]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:30:13,361 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 2.47 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage5]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: Applied catalog in 1.66 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Corrective change: 1", > " Total: 144", > " Out of sync: 2", > " Changed: 2", > "Time:", > " Concat file: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Package manifest: 0.00", > " Sysctl runtime: 0.01", > " Sysctl: 0.01", > " Firewall: 0.01", > " Augeas: 0.02", > " File: 0.06", > " Service: 0.16", > " Exec: 0.29", > " Package: 0.31", > " Last run: 1529674205", > " Config retrieval: 2.90", > " Total: 3.76", > " Filebucket: 0.00", > " Concat fragment: 0.00", > "Version:", > " Config: 1529674201", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/snmp/manifests/params.pp\", 310]:[\"/var/lib/tripleo-config/puppet_step_config.pp\", 37]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 140]:" > ] >} >2018-06-22 09:30:13,391 p=21516 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 5] ***************** >2018-06-22 09:30:13,426 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:30:13,457 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:30:13,471 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:30:13,500 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 5] *** >2018-06-22 09:30:13,535 p=21516 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:30:13,564 p=21516 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:30:13,575 p=21516 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:30:13,603 p=21516 u=mistral | TASK [Start containers for step 5] ********************************************* >2018-06-22 09:30:14,346 p=21516 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:30:14,413 p=21516 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:32:19,970 p=21516 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-06-22 09:32:19,994 p=21516 u=mistral | TASK [Debug output for task which failed: Start containers for step 5] ********* >2018-06-22 09:32:20,093 p=21516 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-22 09:32:20,107 p=21516 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-06-22 09:32:21,694 p=21516 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd ... ", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd", > "e0f71f706c2a: Already exists", > "121ab4741000: Already exists", > "0ae618c39bcc: Pulling fs layer", > "638813f034d9: Pulling fs layer", > "3d81e0db726d: Pulling fs layer", > "45d2ca06bfd4: Pulling fs layer", > "45d2ca06bfd4: Waiting", > "638813f034d9: Verifying Checksum", > "638813f034d9: Download complete", > "45d2ca06bfd4: Verifying Checksum", > "45d2ca06bfd4: Download complete", > "0ae618c39bcc: Verifying Checksum", > "0ae618c39bcc: Download complete", > "3d81e0db726d: Verifying Checksum", > "3d81e0db726d: Download complete", > "0ae618c39bcc: Pull complete", > "638813f034d9: Pull complete", > "3d81e0db726d: Pull complete", > "45d2ca06bfd4: Pull complete", > "Digest: sha256:47c8d2b902f66ebb2daa42cd1e2cd20192916f4ebc39c07c6fb0101a282dd94a", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-06-19.4", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd ... 
", > "2018-06-19.4: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd", > "a8ff0031dfcb: Already exists", > "c66228eb2ac7: Already exists", > "64612d8109ce: Already exists", > "f05eedf542e8: Pulling fs layer", > "f05eedf542e8: Verifying Checksum", > "f05eedf542e8: Download complete", > "f05eedf542e8: Pull complete", > "Digest: sha256:8f01832e948e67b019eaad9b9d1e6f46f05714e79e14f1b81d6adff9934769bf", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-06-19.4", > "stdout: ", > "stderr: Error: unable to find resource 'openstack-cinder-volume'", > "stdout: 8a8191c487faeb7dd0f7285ef8a6f5605bd3550be93daf4719c352ee6bc7a151", > "stderr: Error: unable to find resource 'openstack-cinder-backup'", > "stdout: a1e81f9f9cdaf9617fd2084fac3b51208fd891c7c0f837647324c67b28a8956a", > "stdout: 0003c832006e8304a48f181ebbcc7429ef8657e8cbaedc14520d86047c746f54", > "stdout: Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=US-ASCII", > "Debug: Evicting cache entry for environment 'production'", > "Debug: Caching environment 'production' (ttl = 0 sec)", > "Debug: Loading external facts from /etc/puppet/modules/openstacklib/facts.d", > "Debug: Loading external facts from /var/lib/puppet/facts.d", > "Info: Loading facts", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /etc/puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /etc/puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /etc/puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from 
/etc/puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /etc/puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /etc/puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /etc/puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /etc/puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /etc/puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /etc/puppet/modules/git/lib/facter/git_version.rb", > "Debug: 
Loading facts from /etc/puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /etc/puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /etc/puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /etc/puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /etc/puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /etc/puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /etc/puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /etc/puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from /etc/puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /etc/puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts 
from /etc/puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /etc/puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /etc/puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /etc/puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /etc/puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_major_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_patch_level.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_default_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/java/lib/facter/java_libjvm_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/haproxy/lib/facter/haproxy_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vcsrepo/lib/facter/vcsrepo_svn_ver.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pacemaker_node_name.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/pacemaker/lib/facter/pcmk_is_remote.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_client_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ssh/lib/facter/ssh_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/ip6tables_version.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_persistent_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/firewall/lib/facter/iptables_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/staging/lib/facter/staging_http_get.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsmaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrarelease.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandraminorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandracmsheapnewsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandrapatchversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramajorversion.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/cassandra/lib/facter/cassandramaxheapsize.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_server_id.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysqld_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/mysql/lib/facter/mysql_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_html_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/git/lib/facter/git_exec_path.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/collectd/lib/facter/collectd_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/collectd/lib/facter/python_dir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/sssd_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/ipaclient/lib/facter/ipa_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_nodename.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/rabbitmq_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/rabbitmq/lib/facter/erl_ssl_path.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/puppet_settings.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/root_home.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/package_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/pe_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/facter_dot_d.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/stdlib/lib/facter/service_provider.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/redis/lib/facter/redis_server_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/apache/lib/facter/apache_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/systemd/lib/facter/systemd.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/elasticsearch/lib/facter/es_facts.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/alt_fqdns.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/tripleo/lib/facter/nic_alias.rb", > "Debug: Loading facts from 
/usr/share/openstack-puppet/modules/tripleo/lib/facter/netmask_ipv6.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs_uuid.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/pci_address.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/vswitch/lib/facter/ovs.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/archive/lib/facter/archive_windir.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/sensu/lib/facter/sensu_version.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_workers.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_package_type.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/openstacklib/lib/facter/os_service_default.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/ipa_hostname.rb", > "Debug: Loading facts from /usr/share/openstack-puppet/modules/nova/lib/facter/libvirt_uuid.rb", > "Debug: Facter: Found no suitable resolves of 1 for ec2_metadata", > "Debug: Facter: value for ec2_metadata is still nil", > "Debug: Executing: '/usr/bin/rpm --version'", > "Debug: Failed to load library 'cfpropertylist' for feature 'cfpropertylist'", > "Debug: Executing: '/usr/bin/rpm -ql rpm'", > "Debug: Facter: value for agent_specified_environment is still nil", > "Debug: Facter: value for cfkey is still nil", > "Debug: Facter: Found no suitable resolves of 1 for dhcp_servers", > "Debug: Facter: value for dhcp_servers is still nil", > "Debug: Facter: Found no suitable resolves of 1 for gce", > "Debug: Facter: value for gce is still nil", > "Debug: Facter: value for ipaddress6_br_ex is still nil", > "Debug: Facter: value for ipaddress_br_int is still nil", > "Debug: Facter: value for ipaddress6_br_int is still nil", > "Debug: Facter: value for netmask_br_int is still nil", > "Debug: 
Facter: value for ipaddress_br_isolated is still nil", > "Debug: Facter: value for ipaddress6_br_isolated is still nil", > "Debug: Facter: value for netmask_br_isolated is still nil", > "Debug: Facter: value for ipaddress_br_tun is still nil", > "Debug: Facter: value for ipaddress6_br_tun is still nil", > "Debug: Facter: value for netmask_br_tun is still nil", > "Debug: Facter: value for ipaddress6_docker0 is still nil", > "Debug: Facter: value for ipaddress6_eth0 is still nil", > "Debug: Facter: value for ipaddress_eth1 is still nil", > "Debug: Facter: value for ipaddress6_eth1 is still nil", > "Debug: Facter: value for netmask_eth1 is still nil", > "Debug: Facter: value for ipaddress_eth2 is still nil", > "Debug: Facter: value for ipaddress6_eth2 is still nil", > "Debug: Facter: value for netmask_eth2 is still nil", > "Debug: Facter: value for ipaddress6_lo is still nil", > "Debug: Facter: value for macaddress_lo is still nil", > "Debug: Facter: value for ipaddress_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_ovs_system is still nil", > "Debug: Facter: value for netmask_ovs_system is still nil", > "Debug: Facter: value for ipaddress6_vlan20 is still nil", > "Debug: Facter: value for ipaddress6_vlan30 is still nil", > "Debug: Facter: value for ipaddress6_vlan40 is still nil", > "Debug: Facter: value for ipaddress6_vlan50 is still nil", > "Debug: Facter: value for ipaddress6 is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iphostnumber", > "Debug: Facter: value for iphostnumber is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename", > "Debug: Facter: value for lsbdistcodename is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription", > "Debug: Facter: value for lsbdistdescription is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbdistid", > "Debug: Facter: value for lsbdistid is still nil", > "Debug: Facter: Found no suitable resolves of 1 for 
lsbdistrelease", > "Debug: Facter: value for lsbdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease", > "Debug: Facter: value for lsbmajdistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbminordistrelease", > "Debug: Facter: value for lsbminordistrelease is still nil", > "Debug: Facter: Found no suitable resolves of 1 for lsbrelease", > "Debug: Facter: value for lsbrelease is still nil", > "Debug: Facter: Found no suitable resolves of 2 for swapencrypted", > "Debug: Facter: value for swapencrypted is still nil", > "Debug: Facter: value for network_br_int is still nil", > "Debug: Facter: value for network_br_isolated is still nil", > "Debug: Facter: value for network_br_tun is still nil", > "Debug: Facter: value for network_eth1 is still nil", > "Debug: Facter: value for network_eth2 is still nil", > "Debug: Facter: value for network_ovs_system is still nil", > "Debug: Facter: Found no suitable resolves of 1 for processor", > "Debug: Facter: value for processor is still nil", > "Debug: Facter: value for is_rsc is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_region", > "Debug: Facter: value for rsc_region is still nil", > "Debug: Facter: Found no suitable resolves of 1 for rsc_instance_id", > "Debug: Facter: value for rsc_instance_id is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_enforced", > "Debug: Facter: value for selinux_enforced is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_policyversion", > "Debug: Facter: value for selinux_policyversion is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_current_mode", > "Debug: Facter: value for selinux_current_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for selinux_config_mode", > "Debug: Facter: value for selinux_config_mode is still nil", > "Debug: Facter: Found no suitable resolves of 1 for 
selinux_config_policy", > "Debug: Facter: value for selinux_config_policy is still nil", > "Debug: Facter: value for sshdsakey is still nil", > "Debug: Facter: value for sshfp_dsa is still nil", > "Debug: Facter: value for sshrsakey is still nil", > "Debug: Facter: value for sshfp_rsa is still nil", > "Debug: Facter: value for sshecdsakey is still nil", > "Debug: Facter: value for sshfp_ecdsa is still nil", > "Debug: Facter: value for sshed25519key is still nil", > "Debug: Facter: value for sshfp_ed25519 is still nil", > "Debug: Facter: Found no suitable resolves of 1 for system32", > "Debug: Facter: value for system32 is still nil", > "Debug: Facter: value for vlans is still nil", > "Debug: Facter: Found no suitable resolves of 1 for xendomains", > "Debug: Facter: value for xendomains is still nil", > "Debug: Facter: value for zfs_version is still nil", > "Debug: Facter: Found no suitable resolves of 1 for zonename", > "Debug: Facter: value for zonename is still nil", > "Debug: Facter: value for zpool_version is still nil", > "Debug: Facter: value for java_version is still nil", > "Debug: Facter: value for java_major_version is still nil", > "Debug: Facter: value for java_patch_level is still nil", > "Debug: Facter: value for java_default_home is still nil", > "Debug: Facter: value for java_libjvm_path is still nil", > "Debug: Facter: value for ssh_server_version_full is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_major", > "Debug: Facter: value for ssh_server_version_major is still nil", > "Debug: Facter: Found no suitable resolves of 2 for ssh_server_version_release", > "Debug: Facter: value for ssh_server_version_release is still nil", > "Debug: Facter: Found no suitable resolves of 2 for iptables_persistent_version", > "Debug: Facter: value for iptables_persistent_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for staging_windir", > "Debug: Facter: value for staging_windir is still nil", > 
"Debug: Facter: value for cassandrarelease is still nil", > "Debug: Facter: value for cassandraminorversion is still nil", > "Debug: Facter: value for cassandrapatchversion is still nil", > "Debug: Facter: value for cassandramajorversion is still nil", > "Debug: Facter: value for mysqld_version is still nil", > "Debug: Facter: value for mysql_version is still nil", > "Debug: Facter: value for collectd_version is still nil", > "Debug: Facter: value for sssd_version is still nil", > "Debug: Facter: value for rabbitmq_nodename is still nil", > "Debug: Facter: value for rabbitmq_version is still nil", > "Debug: Facter: value for erl_ssl_path is still nil", > "Debug: Puppet::Type::Package::ProviderSensu_gem: file /opt/sensu/embedded/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderTdagent: file /opt/td-agent/usr/sbin/td-agent-gem does not exist", > "Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist", > "Debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist", > "Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist", > "Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist", > "Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist", > "Debug: Puppet::Type::Package::ProviderDnf: file dnf does not exist", > "Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist", > "Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not exist", > "Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swinstall does not exist", > "Debug: Puppet::Type::Package::ProviderNim: file /usr/sbin/nimclient does not exist", > "Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist", > "Debug: 
Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist", > "Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist", > "Debug: Puppet::Type::Package::ProviderPkgng: file /usr/local/sbin/pkg does not exist", > "Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/emerge does not exist", > "Debug: Puppet::Type::Package::ProviderPorts: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/local/sbin/portupgrade does not exist", > "Debug: Puppet::Type::Package::ProviderPuppet_gem: file /opt/puppetlabs/puppet/bin/gem does not exist", > "Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist", > "Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist", > "Debug: Puppet::Type::Package::ProviderTdnf: file tdnf does not exist", > "Debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist", > "Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist", > "Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist", > "Debug: Facter: value for pe_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_major_version", > "Debug: Facter: value for pe_major_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_minor_version", > "Debug: Facter: value for pe_minor_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for pe_patch_version", > "Debug: Facter: value for pe_patch_version is still nil", > "Debug: Puppet::Type::Service::ProviderNoop: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderInit: false value when expecting true", > "Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist", > "Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d 
does not exist", > "Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist", > "Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenbsd: file /usr/sbin/rcctl does not exist", > "Debug: Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not exist", > "Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist", > "Debug: Puppet::Type::Service::ProviderUpstart: 0 confines (of 4) were true", > "Debug: Facter: value for redis_server_version is still nil", > "Debug: Facter: value for apache_version is still nil", > "Debug: Facter: value for nic_alias is still nil", > "Debug: Facter: value for netmask6_br_int is still nil", > "Debug: Facter: value for netmask6_br_tun is still nil", > "Debug: Facter: value for netmask6_ovs_system is still nil", > "Debug: Facter: value for ovs_uuid is still nil", > "Debug: Facter: value for ovs_version is still nil", > "Debug: Facter: Found no suitable resolves of 2 for archive_windir", > "Debug: Facter: value for archive_windir is still nil", > "Debug: Facter: value for sensu_version is still nil", > "Debug: Facter: value for ipa_hostname is still nil", > "Debug: Facter: value for libvirt_uuid is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/pacemaker.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::pacemaker from tripleo/profile/base/pacemaker into production", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Debug: hiera(): Hiera JSON backend starting", > "Debug: hiera(): Looking up lookup_options in JSON backend", > "Debug: hiera(): Looking for data source docker", > "Debug: hiera(): Looking for data source heat_config_", > "Debug: hiera(): Cannot find datafile 
/etc/puppet/hieradata/heat_config_.json, skipping", > "Debug: hiera(): Looking for data source config_step", > "Debug: hiera(): Looking for data source controller_extraconfig", > "Debug: hiera(): Looking for data source extraconfig", > "Debug: hiera(): Looking for data source service_names", > "Debug: hiera(): Looking for data source service_configs", > "Debug: hiera(): Looking for data source controller", > "Debug: hiera(): Looking for data source bootstrap_node", > "Debug: hiera(): Looking for data source all_nodes", > "Debug: hiera(): Looking for data source vip_data", > "Debug: hiera(): Looking for data source net_ip_map", > "Debug: hiera(): Looking for data source RedHat", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/RedHat.json, skipping", > "Debug: hiera(): Looking for data source neutron_bigswitch_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_bigswitch_data.json, skipping", > "Debug: hiera(): Looking for data source neutron_cisco_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/neutron_cisco_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_n1kv_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_n1kv_data.json, skipping", > "Debug: hiera(): Looking for data source midonet_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/midonet_data.json, skipping", > "Debug: hiera(): Looking for data source cisco_aci_data", > "Debug: hiera(): Cannot find datafile /etc/puppet/hieradata/cisco_aci_data.json, skipping", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_node_ips in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::base::pacemaker::remote_authkey in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::encryption in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::pacemaker::enable_instanceha in JSON backend", > "Debug: hiera(): Looking up step in JSON backend", > "Debug: hiera(): Looking up pcs_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_short_node_names in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_node_ips in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_reconnect_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_monitor_interval in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker_remote_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker_cluster_recheck_interval in JSON backend", > "Debug: hiera(): Looking up tripleo::instanceha in JSON backend", > "Debug: hiera(): Looking up hacluster_pwd in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_bootstrap_node_name in JSON backend", > "Debug: hiera(): Looking up enable_fencing in JSON backend", > "Debug: hiera(): Looking up pacemaker_short_node_names in JSON backend", > "Debug: hiera(): Looking up corosync_ipv6 in JSON backend", > "Debug: hiera(): Looking up corosync_token_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/init.pp' 
in environment production", > "Debug: Automatically imported pacemaker from pacemaker into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/params.pp' in environment production", > "Debug: Automatically imported pacemaker::params from pacemaker/params into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/install.pp' in environment production", > "Debug: Automatically imported pacemaker::install from pacemaker/install into production", > "Debug: hiera(): Looking up pacemaker::install::ensure in JSON backend", > "Debug: Resource package[pacemaker] was not determined to be defined", > "Debug: Create new resource package[pacemaker] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pcs] was not determined to be defined", > "Debug: Create new resource package[pcs] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[fence-agents-all] was not determined to be defined", > "Debug: Create new resource package[fence-agents-all] with params {\"ensure\"=>\"present\"}", > "Debug: Resource package[pacemaker-libs] was not determined to be defined", > "Debug: Create new resource package[pacemaker-libs] with params {\"ensure\"=>\"present\"}", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/service.pp' in environment production", > "Debug: Automatically imported pacemaker::service from pacemaker/service into production", > "Debug: hiera(): Looking up pacemaker::service::ensure in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasstatus in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::hasrestart in JSON backend", > "Debug: hiera(): Looking up pacemaker::service::enable in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/corosync.pp' in environment production", > "Debug: Automatically imported pacemaker::corosync from pacemaker/corosync into production", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_members_rrp in 
JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_name in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::cluster_start_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::manage_fw in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_timeout in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::settle_try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::corosync::pcsd_debug in JSON backend", > "Debug: template[inline]: Bound template variables for inline template in 0.00 seconds", > "Debug: template[inline]: Interpolated template inline template in 0.00 seconds", > "Debug: hiera(): Looking up docker_enabled in JSON backend", > "Debug: importing '/etc/puppet/modules/systemd/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/systemctl/daemon_reload.pp' in environment production", > "Debug: Automatically imported systemd::systemctl::daemon_reload from systemd/systemctl/daemon_reload into production", > "Debug: importing '/etc/puppet/modules/systemd/manifests/unit_file.pp' in environment production", > "Debug: importing '/etc/puppet/modules/stdlib/manifests/init.pp' in environment production", > "Debug: Automatically imported systemd::unit_file from systemd/unit_file into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/stonith.pp' in environment production", > "Debug: Automatically imported pacemaker::stonith from pacemaker/stonith into production", > "Debug: hiera(): Looking up pacemaker::stonith::try_sleep in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/property.pp' in 
environment production", > "Debug: Automatically imported pacemaker::property from pacemaker/property into production", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource_defaults.pp' in environment production", > "Debug: Automatically imported pacemaker::resource_defaults from pacemaker/resource_defaults into production", > "Debug: hiera(): Looking up pacemaker::resource_defaults::defaults in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::post_success_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::tries in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::try_sleep in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::verify_on_create in JSON backend", > "Debug: hiera(): Looking up pacemaker::resource_defaults::ensure in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/cinder/volume_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::cinder::volume_bundle from tripleo/profile/pacemaker/cinder/volume_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::cinder_volume_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::docker_volumes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::docker_environment in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::volume_bundle::step in JSON backend", > "Debug: hiera(): Looking up cinder_volume_short_bootstrap_node_name in JSON backend", > "Debug: importing 
'/etc/puppet/modules/tripleo/manifests/profile/base/cinder/volume.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::cinder::volume from tripleo/profile/base/cinder/volume into production", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_pure_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellsc_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellemc_unity_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellemc_vmax_iscsi_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellemc_vnx_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellemc_xtremio_iscsi_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_hpelefthand_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_dellps_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_iscsi_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_netapp_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_nfs_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_rbd_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_scaleio_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_enable_vrts_hs_backend in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::base::cinder::volume::cinder_enable_nvmeof_backend in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_user_enabled_backends in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::cinder_rbd_client_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::step in JSON backend", > "Debug: hiera(): Looking up cinder_user_enabled_backends in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_user_name in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::cinder from tripleo/profile/base/cinder into production", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::cinder_enable_db_purge in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::step in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_proto in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_hosts in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_password in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_username in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_rpc_use_ssl in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_proto in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_hosts in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_password in JSON 
backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_port in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_username in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::oslomsg_notify_use_ssl in JSON backend", > "Debug: hiera(): Looking up bootstrap_nodeid in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_node_names in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_password in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_port in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_user_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_rpc_use_ssl in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_scheme in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_node_names in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_password in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_port in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_user_name in JSON backend", > "Debug: hiera(): Looking up oslo_messaging_notify_use_ssl in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/init.pp' in environment production", > "Debug: Automatically imported cinder from cinder into production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/params.pp' in environment production", > "Debug: Automatically imported cinder::params from cinder/params into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/defaults.pp' in environment production", > "Debug: Automatically imported openstacklib::defaults from openstacklib/defaults into production", > "Debug: hiera(): Looking up cinder::database_connection in JSON backend", > "Debug: hiera(): Looking up cinder::database_idle_timeout in 
JSON backend", > "Debug: hiera(): Looking up cinder::database_min_pool_size in JSON backend", > "Debug: hiera(): Looking up cinder::database_max_pool_size in JSON backend", > "Debug: hiera(): Looking up cinder::database_max_retries in JSON backend", > "Debug: hiera(): Looking up cinder::database_retry_interval in JSON backend", > "Debug: hiera(): Looking up cinder::database_max_overflow in JSON backend", > "Debug: hiera(): Looking up cinder::rpc_response_timeout in JSON backend", > "Debug: hiera(): Looking up cinder::control_exchange in JSON backend", > "Debug: hiera(): Looking up cinder::rabbit_ha_queues in JSON backend", > "Debug: hiera(): Looking up cinder::rabbit_heartbeat_timeout_threshold in JSON backend", > "Debug: hiera(): Looking up cinder::rabbit_heartbeat_rate in JSON backend", > "Debug: hiera(): Looking up cinder::rabbit_use_ssl in JSON backend", > "Debug: hiera(): Looking up cinder::service_down_time in JSON backend", > "Debug: hiera(): Looking up cinder::report_interval in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_ssl_ca_certs in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_ssl_certfile in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_ssl_keyfile in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_ssl_version in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_reconnect_delay in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_failover_strategy in JSON backend", > "Debug: hiera(): Looking up cinder::kombu_compression in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_durable_queues in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_server_request_prefix in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_broadcast_prefix in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_group_request_prefix in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_container_name in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_idle_timeout 
in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_trace in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_ssl_ca_file in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_ssl_cert_file in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_ssl_key_file in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_ssl_key_password in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_allow_insecure_clients in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_sasl_mechanisms in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_sasl_config_dir in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_sasl_config_name in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_username in JSON backend", > "Debug: hiera(): Looking up cinder::amqp_password in JSON backend", > "Debug: hiera(): Looking up cinder::package_ensure in JSON backend", > "Debug: hiera(): Looking up cinder::api_paste_config in JSON backend", > "Debug: hiera(): Looking up cinder::use_syslog in JSON backend", > "Debug: hiera(): Looking up cinder::use_stderr in JSON backend", > "Debug: hiera(): Looking up cinder::log_facility in JSON backend", > "Debug: hiera(): Looking up cinder::log_dir in JSON backend", > "Debug: hiera(): Looking up cinder::debug in JSON backend", > "Debug: hiera(): Looking up cinder::storage_availability_zone in JSON backend", > "Debug: hiera(): Looking up cinder::default_availability_zone in JSON backend", > "Debug: hiera(): Looking up cinder::allow_availability_zone_fallback in JSON backend", > "Debug: hiera(): Looking up cinder::enable_v3_api in JSON backend", > "Debug: hiera(): Looking up cinder::lock_path in JSON backend", > "Debug: hiera(): Looking up cinder::image_conversion_dir in JSON backend", > "Debug: hiera(): Looking up cinder::host in JSON backend", > "Debug: hiera(): Looking up cinder::purge_config in JSON backend", > "Debug: hiera(): Looking up cinder::backend_host in JSON backend", > "Debug: importing 
'/etc/puppet/modules/cinder/manifests/deps.pp' in environment production", > "Debug: Automatically imported cinder::deps from cinder/deps into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/init.pp' in environment production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/db.pp' in environment production", > "Debug: Automatically imported oslo::db from oslo/db into production", > "Debug: importing '/etc/puppet/modules/openstacklib/manifests/policy/base.pp' in environment production", > "Debug: Automatically imported openstacklib::policy::base from openstacklib/policy/base into production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/db.pp' in environment production", > "Debug: Automatically imported cinder::db from cinder/db into production", > "Debug: hiera(): Looking up cinder::db::database_db_max_retries in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_connection in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_idle_timeout in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_min_pool_size in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_max_pool_size in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_max_retries in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_retry_interval in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_max_overflow in JSON backend", > "Debug: hiera(): Looking up cinder::db::database_pool_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/logging.pp' in environment production", > "Debug: Automatically imported cinder::logging from cinder/logging into production", > "Debug: hiera(): Looking up cinder::logging::use_syslog in JSON backend", > "Debug: hiera(): Looking up cinder::logging::use_json in JSON backend", > "Debug: hiera(): Looking up cinder::logging::use_journal in JSON backend", > "Debug: hiera(): Looking up 
cinder::logging::use_stderr in JSON backend", > "Debug: hiera(): Looking up cinder::logging::log_facility in JSON backend", > "Debug: hiera(): Looking up cinder::logging::log_dir in JSON backend", > "Debug: hiera(): Looking up cinder::logging::debug in JSON backend", > "Debug: hiera(): Looking up cinder::logging::logging_context_format_string in JSON backend", > "Debug: hiera(): Looking up cinder::logging::logging_default_format_string in JSON backend", > "Debug: hiera(): Looking up cinder::logging::logging_debug_format_suffix in JSON backend", > "Debug: hiera(): Looking up cinder::logging::logging_exception_prefix in JSON backend", > "Debug: hiera(): Looking up cinder::logging::log_config_append in JSON backend", > "Debug: hiera(): Looking up cinder::logging::default_log_levels in JSON backend", > "Debug: hiera(): Looking up cinder::logging::publish_errors in JSON backend", > "Debug: hiera(): Looking up cinder::logging::fatal_deprecations in JSON backend", > "Debug: hiera(): Looking up cinder::logging::instance_format in JSON backend", > "Debug: hiera(): Looking up cinder::logging::instance_uuid_format in JSON backend", > "Debug: hiera(): Looking up cinder::logging::log_date_format in JSON backend", > "Debug: importing '/etc/puppet/modules/oslo/manifests/log.pp' in environment production", > "Debug: Automatically imported oslo::log from oslo/log into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/messaging/rabbit.pp' in environment production", > "Debug: Automatically imported oslo::messaging::rabbit from oslo/messaging/rabbit into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/messaging/amqp.pp' in environment production", > "Debug: Automatically imported oslo::messaging::amqp from oslo/messaging/amqp into production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/messaging/default.pp' in environment production", > "Debug: Automatically imported oslo::messaging::default from oslo/messaging/default into 
production", > "Debug: importing '/etc/puppet/modules/oslo/manifests/concurrency.pp' in environment production", > "Debug: Automatically imported oslo::concurrency from oslo/concurrency into production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/ceilometer.pp' in environment production", > "Debug: Automatically imported cinder::ceilometer from cinder/ceilometer into production", > "Debug: hiera(): Looking up cinder::ceilometer::notification_driver in JSON backend", > "Debug: hiera(): Looking up cinder::ceilometer::notification_topics in JSON backend", > "Debug: importing '/etc/puppet/modules/oslo/manifests/messaging/notifications.pp' in environment production", > "Debug: Automatically imported oslo::messaging::notifications from oslo/messaging/notifications into production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/config.pp' in environment production", > "Debug: Automatically imported cinder::config from cinder/config into production", > "Debug: hiera(): Looking up cinder::config::cinder_config in JSON backend", > "Debug: hiera(): Looking up cinder::config::api_paste_ini_config in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/glance.pp' in environment production", > "Debug: Automatically imported cinder::glance from cinder/glance into production", > "Debug: hiera(): Looking up cinder::glance::glance_api_servers in JSON backend", > "Debug: hiera(): Looking up cinder::glance::glance_api_version in JSON backend", > "Debug: hiera(): Looking up cinder::glance::glance_num_retries in JSON backend", > "Debug: hiera(): Looking up cinder::glance::glance_api_insecure in JSON backend", > "Debug: hiera(): Looking up cinder::glance::glance_api_ssl_compression in JSON backend", > "Debug: hiera(): Looking up cinder::glance::glance_request_timeout in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/cron/db_purge.pp' in environment production", > "Debug: Automatically imported cinder::cron::db_purge 
from cinder/cron/db_purge into production", > "Debug: hiera(): Looking up cinder::cron::db_purge::minute in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::hour in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::monthday in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::month in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::weekday in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::user in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::age in JSON backend", > "Debug: hiera(): Looking up cinder::cron::db_purge::destination in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/volume.pp' in environment production", > "Debug: Automatically imported cinder::volume from cinder/volume into production", > "Debug: hiera(): Looking up cinder::volume::package_ensure in JSON backend", > "Debug: hiera(): Looking up cinder::volume::enabled in JSON backend", > "Debug: hiera(): Looking up cinder::volume::manage_service in JSON backend", > "Debug: hiera(): Looking up cinder::volume::volume_clear in JSON backend", > "Debug: hiera(): Looking up cinder::volume::volume_clear_size in JSON backend", > "Debug: hiera(): Looking up cinder::volume::volume_clear_ionice in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/cinder/volume/rbd.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::cinder::volume::rbd from tripleo/profile/base/cinder/volume/rbd into production", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::backend_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_backend_host in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_ceph_conf in JSON backend", > "Debug: hiera(): Looking up 
tripleo::profile::base::cinder::volume::rbd::cinder_rbd_pool_name in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_extra_pools in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::cinder_rbd_secret_uuid in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::volume::rbd::step in JSON backend", > "Debug: hiera(): Looking up cinder::backend::rbd::volume_backend_name in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/backend/rbd.pp' in environment production", > "Debug: Automatically imported cinder::backend::rbd from cinder/backend/rbd into production", > "Debug: importing '/etc/puppet/modules/cinder/manifests/backends.pp' in environment production", > "Debug: Automatically imported cinder::backends from cinder/backends into production", > "Debug: hiera(): Looking up cinder::backends::backend_host in JSON backend", > "Debug: hiera(): Looking up cinder_volume_short_node_names in JSON backend", > "Debug: importing '/etc/puppet/modules/pacemaker/manifests/resource/bundle.pp' in environment production", > "Debug: Automatically imported pacemaker::resource::bundle from pacemaker/resource/bundle into production", > "Debug: hiera(): Looking up systemd::service_limits in JSON backend", > "Debug: hiera(): Looking up systemd::manage_resolved in JSON backend", > "Debug: hiera(): Looking up systemd::resolved_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_networkd in JSON backend", > "Debug: hiera(): Looking up systemd::networkd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::manage_timesyncd in JSON backend", > "Debug: hiera(): Looking up systemd::timesyncd_ensure in JSON backend", > "Debug: hiera(): Looking up systemd::ntp_server in JSON backend", > "Debug: hiera(): Looking up systemd::fallback_ntp_server in JSON backend", > "Debug: importing '/etc/puppet/modules/oslo/manifests/params.pp' in 
environment production", > "Debug: Automatically imported oslo::params from oslo/params into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/bindings.pp' in environment production", > "Debug: Automatically imported mysql::bindings from mysql/bindings into production", > "Debug: importing '/etc/puppet/modules/mysql/manifests/params.pp' in environment production", > "Debug: Automatically imported mysql::params from mysql/params into production", > "Debug: hiera(): Looking up mysql::bindings::install_options in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::java_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::perl_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::php_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::python_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::ruby_enable in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::client_dev in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::daemon_dev in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::java_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::java_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::java_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::perl_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::perl_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::perl_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::php_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::php_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::php_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::python_package_ensure in JSON backend", > "Debug: hiera(): Looking up 
mysql::bindings::python_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::python_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::ruby_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::ruby_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::ruby_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::client_dev_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::client_dev_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::client_dev_package_provider in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::daemon_dev_package_ensure in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::daemon_dev_package_name in JSON backend", > "Debug: hiera(): Looking up mysql::bindings::daemon_dev_package_provider in JSON backend", > "Debug: importing '/etc/puppet/modules/mysql/manifests/bindings/python.pp' in environment production", > "Debug: Automatically imported mysql::bindings::python from mysql/bindings/python into production", > "Debug: Resource package[ceph-common] was not determined to be defined", > "Debug: Create new resource package[ceph-common] with params {\"ensure\"=>\"present\", \"name\"=>\"ceph-common\", \"tag\"=>\"cinder-support-package\"}", > "Debug: Resource file[/etc/sysconfig/openstack-cinder-volume] was not determined to be defined", > "Debug: Create new resource file[/etc/sysconfig/openstack-cinder-volume] with params {\"ensure\"=>\"present\"}", > "Debug: hiera(): Looking up pacemaker::resource::bundle::deep_compare in JSON backend", > "Debug: Adding relationship from Service[pcsd] to Exec[auth-successful-across-all-nodes] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to 
Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Exec[reauthenticate-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[auth-successful-across-all-nodes] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Create Cluster tripleo_cluster] to Exec[Start Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[corosync] with 'before'", > "Debug: Adding relationship from Exec[Start Cluster tripleo_cluster] to Service[pacemaker] with 'before'", > "Debug: Adding relationship from Service[corosync] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from Service[pacemaker] to Exec[wait-for-settle] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker] to File[etc-pacemaker-authkey] with 'before'", > "Debug: Adding relationship from File[etc-pacemaker-authkey] to Exec[Create Cluster tripleo_cluster] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property--stonith-enabled] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-cinder-volume-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[openstack-cinder-volume] with 'before'", > "Debug: Adding relationship from Class[Pacemaker] to Class[Pacemaker::Corosync] with 'before'", > "Debug: Adding relationship from File[/etc/systemd/system/resource-agents-deps.target.wants] to Systemd::Unit_file[docker.service] with 'before'", > "Debug: Adding relationship from Systemd::Unit_file[docker.service] to Class[Systemd::Systemctl::Daemon_reload] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::begin] to Package[cinder] with 'before'", > "Debug: Adding relationship from Package[cinder] to Anchor[cinder::install::end] with 'notify'", > "Debug: 
Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::config::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/report_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/service_down_time] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/api_paste_config] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/storage_availability_zone] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/default_availability_zone] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/allow_availability_zone_fallback] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/image_conversion_dir] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/host] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/enable_v3_api] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_api_servers] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_api_version] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_num_retries] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_api_insecure] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_api_ssl_compression] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/glance_request_timeout] with 
'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/volume_clear] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/volume_clear_size] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/volume_clear_ionice] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/enabled_backends] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/backend_host] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/sqlite_synchronous] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/backend] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/connection] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/slave_connection] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/mysql_sql_mode] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/idle_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/min_pool_size] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/max_pool_size] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/max_retries] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/max_overflow] with 'before'", > "Debug: Adding 
relationship from Anchor[cinder::config::begin] to Cinder_config[database/connection_debug] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/connection_trace] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/pool_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/use_db_reconnect] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/db_retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/db_inc_retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/db_max_retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/db_max_retries] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[database/use_tpool] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/debug] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/log_config_append] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/log_date_format] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/log_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/log_dir] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/watch_log_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/use_syslog] with 'before'", > "Debug: Adding relationship from 
Anchor[cinder::config::begin] to Cinder_config[DEFAULT/use_journal] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/use_json] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/syslog_log_facility] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/use_stderr] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_context_format_string] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_default_format_string] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_debug_format_suffix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_exception_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/logging_user_identity_format] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/default_log_levels] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/publish_errors] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/instance_format] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/instance_uuid_format] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/fatal_deprecations] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/amqp_durable_queues] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to 
Cinder_config[oslo_messaging_rabbit/heartbeat_rate] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/kombu_compression] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_interval_max] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_login_method] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_password] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_userid] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to 
Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_hosts] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_port] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_host] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl_ca_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl_cert_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl_key_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_rabbit/ssl_version] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/addressing_mode] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/server_request_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/broadcast_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/group_request_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/rpc_address_prefix] with 'before'", > "Debug: Adding 
relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/notify_address_prefix] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/multicast_address] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/unicast_address] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/anycast_address] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/default_notification_exchange] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/default_rpc_exchange] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/pre_settled] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/container_name] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/idle_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/trace] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl_ca_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl_cert_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl_key_file] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/ssl_key_password] with 'before'", > 
"Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/allow_insecure_clients] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/sasl_mechanisms] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/sasl_config_dir] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/sasl_config_name] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/sasl_default_realm] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/username] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/password] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/default_send_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_amqp/default_notify_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/rpc_response_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/transport_url] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/control_exchange] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_concurrency/disable_process_locking] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_concurrency/lock_path] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_notifications/driver] with 'before'", > 
"Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_notifications/transport_url] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[oslo_messaging_notifications/topics] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/volume_backend_name] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/volume_driver] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_ceph_conf] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_user] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_pool] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_max_clone_depth] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_secret_uuid] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rados_connect_timeout] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rados_connection_interval] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rados_connection_retries] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[tripleo_ceph/rbd_store_chunk_size] with 'before'", > "Debug: Adding relationship from Cinder_config[DEFAULT/report_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship 
from Cinder_config[DEFAULT/service_down_time] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/api_paste_config] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/storage_availability_zone] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/default_availability_zone] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/allow_availability_zone_fallback] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/image_conversion_dir] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/host] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/enable_v3_api] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_api_servers] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_api_version] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_num_retries] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_api_insecure] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_api_ssl_compression] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/glance_request_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/volume_clear] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/volume_clear_size] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship 
from Cinder_config[DEFAULT/volume_clear_ionice] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/enabled_backends] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/backend_host] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/sqlite_synchronous] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/backend] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/connection] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/slave_connection] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/mysql_sql_mode] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/idle_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/min_pool_size] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/max_pool_size] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/max_retries] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/max_overflow] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/connection_debug] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/connection_trace] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/pool_timeout] to 
Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/use_db_reconnect] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/db_retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/db_inc_retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/db_max_retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/db_max_retries] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[database/use_tpool] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/debug] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/log_config_append] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/log_date_format] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/log_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/log_dir] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/watch_log_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/use_syslog] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/use_journal] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/use_json] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/syslog_log_facility] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from 
Cinder_config[DEFAULT/use_stderr] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_context_format_string] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_default_format_string] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_debug_format_suffix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_exception_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/logging_user_identity_format] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/default_log_levels] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/publish_errors] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/instance_format] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/instance_uuid_format] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/fatal_deprecations] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/amqp_durable_queues] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/heartbeat_rate] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/kombu_compression] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from 
Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_interval_max] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_login_method] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_password] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_userid] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_hosts] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_port] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from 
Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_host] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl_ca_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl_cert_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl_key_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_rabbit/ssl_version] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/addressing_mode] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/server_request_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/broadcast_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/group_request_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/rpc_address_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/notify_address_prefix] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/multicast_address] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/unicast_address] to Anchor[cinder::config::end] with 'notify'", > 
"Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/anycast_address] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/default_notification_exchange] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/default_rpc_exchange] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/pre_settled] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/container_name] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/idle_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/trace] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl_ca_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl_cert_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl_key_file] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/ssl_key_password] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/allow_insecure_clients] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/sasl_mechanisms] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/sasl_config_dir] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding 
relationship from Cinder_config[oslo_messaging_amqp/sasl_config_name] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/sasl_default_realm] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/username] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/password] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/default_send_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_amqp/default_notify_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/rpc_response_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/transport_url] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/control_exchange] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_concurrency/disable_process_locking] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_concurrency/lock_path] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_notifications/driver] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_notifications/transport_url] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[oslo_messaging_notifications/topics] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/volume_backend_name] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from 
Cinder_config[tripleo_ceph/volume_driver] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_ceph_conf] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_user] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_pool] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_max_clone_depth] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_secret_uuid] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rados_connect_timeout] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rados_connection_interval] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rados_connection_retries] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[tripleo_ceph/rbd_store_chunk_size] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::db::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::begin] to Anchor[cinder::db::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::db::end] to Anchor[cinder::dbsync::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::dbsync::begin] to Anchor[cinder::dbsync::end] with 'before'", > "Debug: Adding relationship from Anchor[cinder::dbsync::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::service::begin] to 
Service[cinder-volume] with 'notify'", > "Debug: Adding relationship from Service[cinder-volume] to Anchor[cinder::service::end] with 'notify'", > "Debug: Adding relationship from Oslo::Db[cinder_config] to Anchor[cinder::dbsync::begin] with 'before'", > "Debug: Adding relationship from Anchor[cinder::install::begin] to Package[ceph-common] with 'before'", > "Debug: Adding relationship from Package[ceph-common] to Anchor[cinder::install::end] with 'before'", > "Debug: Adding relationship from Package[cinder] to Anchor[cinder::service::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::install::end] to Anchor[cinder::service::begin] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::config::end] to Anchor[cinder::service::begin] with 'notify'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.72 seconds", > "Debug: puppet-pacemaker: initialize()", > "Debug: Creating default schedules", > "Info: Applying configuration version '1529674243'", > "Debug: /Stage[main]/Pacemaker/before: subscribes to Class[Pacemaker::Corosync]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]/before: subscribes to Exec[auth-successful-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/before: subscribes to Service[pcsd]", 
> "Debug: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_debug_ini]/notify: subscribes to Service[pcsd]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]/notify: subscribes to Exec[reauthenticate-across-all-nodes]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/require: subscribes to User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[wait-for-settle]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/require: subscribes to Class[Pacemaker::Install]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/before: subscribes to Exec[Start Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/require: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[corosync]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/before: subscribes to Service[pacemaker]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/before: subscribes to File[etc-pacemaker-authkey]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/before: subscribes to Exec[Create Cluster tripleo_cluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to 
Pcmk_property[property--stonith-enabled]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-cinder-volume-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[openstack-cinder-volume]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/before: subscribes to Systemd::Unit_file[docker.service]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/before: subscribes to Class[Pacemaker]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]/before: subscribes to Package[cinder]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]/before: subscribes to Package[ceph-common]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/before: subscribes to Anchor[cinder::config::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/report_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/service_down_time]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/api_paste_config]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/storage_availability_zone]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/default_availability_zone]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to 
Cinder_config[DEFAULT/allow_availability_zone_fallback]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/image_conversion_dir]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/host]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/enable_v3_api]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_api_servers]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_api_version]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_num_retries]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_api_insecure]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_api_ssl_compression]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/glance_request_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/volume_clear]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/volume_clear_size]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/volume_clear_ionice]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/enabled_backends]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/backend_host]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to 
Cinder_config[database/sqlite_synchronous]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/backend]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/connection]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/slave_connection]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/mysql_sql_mode]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/idle_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/min_pool_size]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/max_pool_size]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/max_retries]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/retry_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/max_overflow]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/connection_debug]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/connection_trace]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/pool_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/use_db_reconnect]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/db_retry_interval]", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/db_inc_retry_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/db_max_retry_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/db_max_retries]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[database/use_tpool]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/debug]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/log_config_append]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/log_date_format]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/log_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/log_dir]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/watch_log_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/use_syslog]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/use_journal]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/use_json]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/syslog_log_facility]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/use_stderr]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to 
Cinder_config[DEFAULT/logging_context_format_string]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/logging_default_format_string]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/logging_debug_format_suffix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/logging_exception_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/logging_user_identity_format]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/default_log_levels]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/publish_errors]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/instance_format]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/instance_uuid_format]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/fatal_deprecations]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/heartbeat_rate]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/kombu_compression]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to 
Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_login_method]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_password]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_userid]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_hosts]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_port]", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_host]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl_ca_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl_cert_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl_key_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_rabbit/ssl_version]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/addressing_mode]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/server_request_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/broadcast_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/group_request_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/rpc_address_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/notify_address_prefix]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/multicast_address]", > 
"Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/unicast_address]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/anycast_address]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/default_notification_exchange]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/default_rpc_exchange]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/pre_settled]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/container_name]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/idle_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/trace]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl_ca_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl_cert_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl_key_file]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/ssl_key_password]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/allow_insecure_clients]", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/sasl_mechanisms]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/sasl_config_dir]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/sasl_config_name]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/sasl_default_realm]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/username]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/password]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/default_send_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_amqp/default_notify_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/rpc_response_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/transport_url]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/control_exchange]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_concurrency/disable_process_locking]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_concurrency/lock_path]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_notifications/driver]", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_notifications/transport_url]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[oslo_messaging_notifications/topics]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/volume_backend_name]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/volume_driver]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_ceph_conf]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_user]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_pool]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_max_clone_depth]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_secret_uuid]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rados_connect_timeout]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rados_connection_interval]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rados_connection_retries]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[tripleo_ceph/rbd_store_chunk_size]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/before: 
subscribes to Anchor[cinder::db::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]/before: subscribes to Anchor[cinder::db::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]/notify: subscribes to Anchor[cinder::dbsync::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]/before: subscribes to Anchor[cinder::dbsync::end]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]/notify: subscribes to Service[cinder-volume]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/require: subscribes to Class[Mysql::Bindings]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/require: subscribes to Class[Mysql::Bindings::Python]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/before: subscribes to Anchor[cinder::dbsync::begin]", > "Debug: /Stage[main]/Cinder/Package[cinder]/notify: subscribes to Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Package[cinder]/notify: subscribes to Anchor[cinder::service::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/report_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/service_down_time]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/allow_availability_zone_fallback]/notify: subscribes 
to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/image_conversion_dir]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/host]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_num_retries]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_insecure]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_ssl_compression]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_request_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/require: subscribes to Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Volume/Service[cinder-volume]/notify: subscribes to Anchor[cinder::service::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_size]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_ionice]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Property[cinder-volume-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[openstack-cinder-volume]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/notify: subscribes to Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/sqlite_synchronous]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/backend]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/slave_connection]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/mysql_sql_mode]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/idle_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/min_pool_size]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_pool_size]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/retry_interval]/notify: 
subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_overflow]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_debug]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_trace]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/pool_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_db_reconnect]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_retry_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_inc_retry_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retry_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_tpool]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_config_append]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_date_format]/notify: subscribes to 
Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/watch_log_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_syslog]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_journal]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_json]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/syslog_log_facility]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_stderr]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_context_format_string]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_default_format_string]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_debug_format_suffix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_exception_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_user_identity_format]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/default_log_levels]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/publish_errors]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_format]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_uuid_format]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/fatal_deprecations]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_rate]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_compression]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]/notify: subscribes 
to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_login_method]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_password]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_userid]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_hosts]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_port]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_host]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_ca_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_cert_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_key_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_version]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/addressing_mode]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/server_request_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/broadcast_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/group_request_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/rpc_address_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/notify_address_prefix]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/multicast_address]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/unicast_address]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/anycast_address]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notification_exchange]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_rpc_exchange]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/pre_settled]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/container_name]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/idle_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/trace]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_ca_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_cert_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_file]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_password]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/allow_insecure_clients]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_mechanisms]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_dir]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_name]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_default_realm]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/username]/notify: subscribes to 
Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/password]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_send_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notify_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/rpc_response_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/disable_process_locking]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/topics]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_max_clone_depth]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connect_timeout]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_interval]/notify: subscribes to Anchor[cinder::config::end]", > 
"Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_retries]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_store_chunk_size]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Package[ceph-common]/before: subscribes to Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/notify: subscribes to Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: Adding autorequire relationship with User[hacluster]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/report_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/service_down_time]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/allow_availability_zone_fallback]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/image_conversion_dir]: Adding autorequire 
relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/host]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_num_retries]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_insecure]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_ssl_compression]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_request_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_size]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_ionice]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Adding autorequire relationship with File[/etc/systemd/system/resource-agents-deps.target.wants]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/sqlite_synchronous]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/backend]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/slave_connection]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/mysql_sql_mode]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/idle_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/min_pool_size]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_pool_size]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_overflow]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_debug]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_trace]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/pool_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_db_reconnect]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_inc_retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_tpool]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_config_append]: Adding autorequire relationship with Anchor[cinder::install::end]", 
> "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_date_format]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/watch_log_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_syslog]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_journal]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_json]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/syslog_log_facility]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_stderr]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_context_format_string]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_default_format_string]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_debug_format_suffix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_exception_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_user_identity_format]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/default_log_levels]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/publish_errors]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_format]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_uuid_format]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/fatal_deprecations]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_rate]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]: Adding autorequire relationship with Anchor[cinder::install::end]", > 
"Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_compression]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_login_method]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_password]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_userid]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_hosts]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_port]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_host]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_ca_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_cert_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_key_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_version]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/addressing_mode]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/server_request_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/broadcast_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/group_request_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/rpc_address_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/notify_address_prefix]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/multicast_address]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/unicast_address]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/anycast_address]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notification_exchange]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_rpc_exchange]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/pre_settled]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/container_name]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/idle_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/trace]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_ca_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_cert_file]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_file]: Adding autorequire relationship with 
Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_password]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/allow_insecure_clients]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_mechanisms]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_dir]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_name]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_default_realm]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/username]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/password]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_send_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notify_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/rpc_response_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/disable_process_locking]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/topics]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_max_clone_depth]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connect_timeout]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_interval]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_retries]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_store_chunk_size]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]: Adding autorequire relationship with File[/etc/sysconfig/openstack-cinder-volume]", > "Debug: Stage[main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Stage[main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Settings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Settings]: Resource is being skipped, unscheduling all events", > "Debug: Class[Main]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Main]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Base::Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Install]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Install]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching yum resources for 
package", > "Debug: Executing '/usr/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n''", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Install/Package[pcs]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Install/Package[fence-agents-all]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Install/Package[pacemaker-libs]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Service]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]: The container Class[Tripleo::Profile::Base::Pacemaker] will propagate my refresh 
event", > "Debug: Systemd::Unit_file[docker.service]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Systemd::Unit_file[docker.service]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Stonith]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Stonith]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[Disable STONITH]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Property[Disable STONITH]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Resource_defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Resource_defaults]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Pacemaker::Cinder::Volume_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Pacemaker::Cinder::Volume_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Cinder::Volume]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", 
> "Debug: Class[Tripleo::Profile::Base::Cinder::Volume]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Base::Cinder]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder]: Resource is being skipped, unscheduling all events", > "Debug: Class[Openstacklib::Defaults]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Openstacklib::Defaults]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Deps]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Deps]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::begin]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Db]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Db]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Logging]: 
Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Logging]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Log[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Log[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Package[cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Package[cinder]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Resources[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Resources[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Messaging::Rabbit[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Messaging::Rabbit[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Messaging::Amqp[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Messaging::Amqp[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Messaging::Default[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Messaging::Default[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Concurrency[cinder_config]: Not tagged with file, file_line, concat, 
augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Concurrency[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Ceilometer]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Ceilometer]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Messaging::Notifications[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Messaging::Notifications[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Config]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Glance]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Glance]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Cron::Db_purge]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Cron::Db_purge]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Volume]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Volume]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Cinder::Volume::Rbd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
Class[Tripleo::Profile::Base::Cinder::Volume::Rbd]: Resource is being skipped, unscheduling all events", > "Debug: Cinder::Backend::Rbd[tripleo_ceph]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Cinder::Backend::Rbd[tripleo_ceph]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume/Exec[exec-setfacl-openstack-cinder]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume/Exec[exec-setfacl-openstack-cinder]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Backends]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Backends]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[cinder-volume-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Property[cinder-volume-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: Class[Systemd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Systemd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/mode: Not managing symlink mode", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Info: 
/Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]: The container Systemd::Unit_file[docker.service] will propagate my refresh event", > "Info: Systemd::Unit_file[docker.service]: Unscheduling all events on Systemd::Unit_file[docker.service]", > "Info: Class[Tripleo::Profile::Base::Pacemaker]: Unscheduling all events on Class[Tripleo::Profile::Base::Pacemaker]", > "Debug: Class[Pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: Class[Pacemaker::Corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Pacemaker::Corosync]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Service/Service[pcsd]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/User[hacluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[auth-successful-across-all-nodes]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/ensure: created", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/ensure: defined content as '{md5}a839b1ab3552f629efbcc7aaf42e7964'", > "Debug: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]: The container Class[Pacemaker::Corosync] will propagate my refresh event", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Service/Service[corosync]: Resource is being 
skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Service/Service[pacemaker]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Resource is being skipped, unscheduling all events", > "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Systemd::Systemctl::Daemon_reload]: Resource is being skipped, unscheduling all events", > "Info: Class[Systemd::Systemctl::Daemon_reload]: Unscheduling all events on Class[Systemd::Systemctl::Daemon_reload]", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1uk269s returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1uk269s property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: property exists: property show | grep stonith-enabled | grep false > /dev/null 2>&1 -> ", > "Debug: Class[Oslo::Params]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Oslo::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Params]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Mysql::Params]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Bindings]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Mysql::Bindings]: Resource is being skipped, unscheduling all events", > "Debug: Class[Mysql::Bindings::Python]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Mysql::Bindings::Python]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Mysql::Bindings::Python/Package[python-mysqldb]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Mysql::Bindings::Python/Package[python-mysqldb]: Resource is being skipped, unscheduling all events", > "Debug: Oslo::Db[cinder_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Oslo::Db[cinder_config]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Package[ceph-common]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Package[ceph-common]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::install::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/report_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/report_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/service_down_time]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/service_down_time]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/allow_availability_zone_fallback]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/allow_availability_zone_fallback]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/image_conversion_dir]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/image_conversion_dir]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/host]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/host]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]: Resource is being skipped, unscheduling all 
events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_num_retries]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_num_retries]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_insecure]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_insecure]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_ssl_compression]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_ssl_compression]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_request_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_request_timeout]: Resource is being skipped, unscheduling all events", > "Debug: Prefetching crontab resources for cron", > "Debug: looking for crontabs in /var/spool/cron", > "Debug: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]: Not tagged with file, file_line, concat, 
augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_size]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_size]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_ionice]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Volume/Cinder_config[DEFAULT/volume_clear_ionice]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/sqlite_synchronous]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/sqlite_synchronous]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/backend]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/backend]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/slave_connection]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/slave_connection]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/mysql_sql_mode]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/mysql_sql_mode]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/idle_timeout]: Not tagged 
with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/idle_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/min_pool_size]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/min_pool_size]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_pool_size]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_pool_size]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/retry_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_overflow]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, 
pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_overflow]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_debug]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_debug]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_trace]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection_trace]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/pool_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/pool_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_db_reconnect]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_db_reconnect]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_retry_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_inc_retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_inc_retry_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retry_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_tpool]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/use_tpool]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_config_append]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_config_append]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_date_format]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_date_format]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/watch_log_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/watch_log_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_syslog]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_syslog]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_journal]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_journal]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_json]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_json]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/syslog_log_facility]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/syslog_log_facility]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_stderr]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/use_stderr]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_context_format_string]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_context_format_string]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_default_format_string]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_default_format_string]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_debug_format_suffix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_debug_format_suffix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_exception_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_exception_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_user_identity_format]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/logging_user_identity_format]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/default_log_levels]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/default_log_levels]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/publish_errors]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/publish_errors]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_format]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_format]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_uuid_format]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/instance_uuid_format]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/fatal_deprecations]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/fatal_deprecations]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/amqp_durable_queues]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_rate]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_rate]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_compression]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_compression]: Resource is being skipped, unscheduling all 
events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_failover_strategy]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_missing_consumer_retry_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/kombu_reconnect_delay]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_interval_max]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_login_method]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_login_method]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_password]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_backoff]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_retry_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_transient_queues_ttl]: Resource is 
being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_userid]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_userid]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_virtual_host]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_hosts]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_hosts]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_port]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > 
"Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_port]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_qos_prefetch_count]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_host]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_host]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/rabbit_ha_queues]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_ca_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_ca_file]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_cert_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_cert_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_key_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_key_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_version]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl_version]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/addressing_mode]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/addressing_mode]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/server_request_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/server_request_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/broadcast_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/broadcast_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/group_request_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/group_request_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/rpc_address_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/rpc_address_prefix]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/notify_address_prefix]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/notify_address_prefix]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/multicast_address]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/multicast_address]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/unicast_address]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/unicast_address]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/anycast_address]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/anycast_address]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notification_exchange]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notification_exchange]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_rpc_exchange]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_rpc_exchange]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/pre_settled]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/pre_settled]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/container_name]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/container_name]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/idle_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/idle_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/trace]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/trace]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl]: Not tagged with file, file_line, concat, augeas, 
pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_ca_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_ca_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_cert_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_cert_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_file]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_file]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/ssl_key_password]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/allow_insecure_clients]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/allow_insecure_clients]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_mechanisms]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_mechanisms]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_dir]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_dir]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_name]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_config_name]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_default_realm]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/sasl_default_realm]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/username]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/username]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/password]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/password]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_send_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_send_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notify_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Amqp[cinder_config]/Cinder_config[oslo_messaging_amqp/default_notify_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/rpc_response_timeout]: Not tagged with file, 
file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/rpc_response_timeout]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/disable_process_locking]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/disable_process_locking]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/topics]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/topics]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_max_clone_depth]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_max_clone_depth]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connect_timeout]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connect_timeout]: Resource is being skipped, unscheduling all events", > "Debug: 
/Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_interval]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_interval]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_retries]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rados_connection_retries]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_store_chunk_size]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_store_chunk_size]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: 
/Stage[main]/Cinder::Deps/Anchor[cinder::db::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::db::end]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::begin]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::dbsync::end]: Resource is being skipped, unscheduling all events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]: The container Cinder::Backend::Rbd[tripleo_ceph] will propagate my refresh event", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Info: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]: Scheduling refresh of Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]: The container 
Cinder::Backend::Rbd[tripleo_ceph] will propagate my refresh event", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Resource is being skipped, unscheduling all events", > "Info: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]: Unscheduling all events on Anchor[cinder::service::begin]", > "Debug: /Stage[main]/Cinder::Volume/Service[cinder-volume]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Volume/Service[cinder-volume]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Resource is being skipped, unscheduling all events", > "Info: Cinder::Backend::Rbd[tripleo_ceph]: Unscheduling all events on Cinder::Backend::Rbd[tripleo_ceph]", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-g2tuuw returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-g2tuuw property show | grep cinder-volume-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep cinder-volume-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1dqgzjb returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1dqgzjb property set --node controller-0 cinder-volume-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster 
cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1dqgzjb diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1dqgzjb.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 cinder-volume-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Property[cinder-volume-role-controller-0]/Pcmk_property[property-controller-0-cinder-volume-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Property[cinder-volume-role-controller-0]/Pcmk_property[property-controller-0-cinder-volume-role]: The container Pacemaker::Property[cinder-volume-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[cinder-volume-role-controller-0]: Unscheduling all events on Pacemaker::Property[cinder-volume-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[openstack-cinder-volume]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Resource::Bundle[openstack-cinder-volume]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-mjdevj returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-mjdevj constraint list | grep location-openstack-cinder-volume > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hcwrif returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-hcwrif resource show openstack-cinder-volume > /dev/null 2>&1", > "Debug: Exists: bundle openstack-cinder-volume exists 1 location exists 1 deep_compare: false", > "Debug: Create: resource exists 1 location exists 1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-edeb7z returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-edeb7z resource bundle create openstack-cinder-volume container docker image=192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest replicas=1 options=\"--ipc=host --privileged=true --user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=cinder-volume-etc-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=cinder-volume-etc-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=cinder-volume-etc-pki-ca-trust-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=cinder-volume-etc-pki-ca-trust-source-anchors source-dir=/etc/pki/ca-trust/source/anchors target-dir=/etc/pki/ca-trust/source/anchors options=ro storage-map id=cinder-volume-etc-pki-tls-certs-ca-bundle.crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=cinder-volume-etc-pki-tls-certs-ca-bundle.trust.crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=cinder-volume-etc-pki-tls-cert.pem source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=cinder-volume-dev-log source-dir=/dev/log target-dir=/dev/log options=rw storage-map id=cinder-volume-etc-ssh-ssh_known_hosts source-dir=/etc/ssh/ssh_known_hosts target-dir=/etc/ssh/ssh_known_hosts options=ro storage-map id=cinder-volume-etc-puppet source-dir=/etc/puppet target-dir=/etc/puppet options=ro storage-map id=cinder-volume-var-lib-kolla-config_files-cinder_volume.json source-dir=/var/lib/kolla/config_files/cinder_volume.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map 
id=cinder-volume-var-lib-config-data-puppet-generated-cinder- source-dir=/var/lib/config-data/puppet-generated/cinder/ target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=cinder-volume-etc-iscsi source-dir=/etc/iscsi target-dir=/var/lib/kolla/config_files/src-iscsid options=ro storage-map id=cinder-volume-etc-ceph source-dir=/etc/ceph target-dir=/var/lib/kolla/config_files/src-ceph options=ro storage-map id=cinder-volume-lib-modules source-dir=/lib/modules target-dir=/lib/modules options=ro storage-map id=cinder-volume-dev- source-dir=/dev/ target-dir=/dev/ options=rw storage-map id=cinder-volume-run- source-dir=/run/ target-dir=/run/ options=rw storage-map id=cinder-volume-sys source-dir=/sys target-dir=/sys options=rw storage-map id=cinder-volume-var-lib-cinder source-dir=/var/lib/cinder target-dir=/var/lib/cinder options=rw storage-map id=cinder-volume-var-log-containers-cinder source-dir=/var/log/containers/cinder target-dir=/var/log/cinder options=rw --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-edeb7z diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-edeb7z.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location openstack-cinder-volume rule resource-discovery=exclusive score=0 cinder-volume-role eq true", > "Debug: location_rule_create: constraint location openstack-cinder-volume rule resource-discovery=exclusive score=0 cinder-volume-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-55693q returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-55693q constraint location openstack-cinder-volume rule resource-discovery=exclusive score=0 cinder-volume-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-55693q 
diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-55693q.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-7xdx8s returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-7xdx8s resource enable openstack-cinder-volume", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-7xdx8s diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-7xdx8s.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Resource::Bundle[openstack-cinder-volume]/Pcmk_bundle[openstack-cinder-volume]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Resource::Bundle[openstack-cinder-volume]/Pcmk_bundle[openstack-cinder-volume]: The container Pacemaker::Resource::Bundle[openstack-cinder-volume] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[openstack-cinder-volume]: Unscheduling all events on Pacemaker::Resource::Bundle[openstack-cinder-volume]", > "Debug: /Schedule[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[puppet]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[hourly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[hourly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[daily]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[daily]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[weekly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, 
pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[weekly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[monthly]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[monthly]: Resource is being skipped, unscheduling all events", > "Debug: /Schedule[never]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Schedule[never]: Resource is being skipped, unscheduling all events", > "Debug: /Filebucket[puppet]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Filebucket[puppet]: Resource is being skipped, unscheduling all events", > "Debug: Finishing transaction 34673920", > "Debug: Storing state", > "Info: Creating state file /var/lib/puppet/state/state.yaml", > "Debug: Stored state in 0.00 seconds", > "Notice: Applied catalog in 32.49 seconds", > "Changes:", > " Total: 8", > "Events:", > " Success: 8", > "Resources:", > " Skipped: 174", > " Total: 184", > " Out of sync: 8", > " Changed: 8", > "Time:", > " File line: 0.00", > " File: 0.01", > " Pcmk property: 10.52", > " Last run: 1529674278", > " Pcmk bundle: 21.26", > " Config retrieval: 3.04", > " Total: 34.84", > "Version:", > " Config: 1529674243", > " Puppet: 4.8.2", > "Debug: Applying settings catalog for sections main, reporting, metrics", > "Debug: Using settings: adding file resource 'confdir': 'File[/etc/puppet]{:path=>\"/etc/puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'vardir': 'File[/var/lib/puppet]{:path=>\"/var/lib/puppet\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file 
resource 'logdir': 'File[/var/log/puppet]{:path=>\"/var/log/puppet\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'statedir': 'File[/var/lib/puppet/state]{:path=>\"/var/lib/puppet/state\", :mode=>\"1755\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'rundir': 'File[/var/run/puppet]{:path=>\"/var/run/puppet\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'libdir': 'File[/var/lib/puppet/lib]{:path=>\"/var/lib/puppet/lib\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'hiera_config': 'File[/etc/puppet/hiera.yaml]{:path=>\"/etc/puppet/hiera.yaml\", :ensure=>:file, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'preview_outputdir': 'File[/var/lib/puppet/preview]{:path=>\"/var/lib/puppet/preview\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'certdir': 'File[/etc/puppet/ssl/certs]{:path=>\"/etc/puppet/ssl/certs\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'ssldir': 'File[/etc/puppet/ssl]{:path=>\"/etc/puppet/ssl\", :mode=>\"771\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'publickeydir': 'File[/etc/puppet/ssl/public_keys]{:path=>\"/etc/puppet/ssl/public_keys\", :mode=>\"755\", :owner=>\"puppet\", 
:group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'requestdir': 'File[/etc/puppet/ssl/certificate_requests]{:path=>\"/etc/puppet/ssl/certificate_requests\", :mode=>\"755\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatekeydir': 'File[/etc/puppet/ssl/private_keys]{:path=>\"/etc/puppet/ssl/private_keys\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'privatedir': 'File[/etc/puppet/ssl/private]{:path=>\"/etc/puppet/ssl/private\", :mode=>\"750\", :owner=>\"puppet\", :group=>\"puppet\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: Using settings: adding file resource 'pluginfactdest': 'File[/var/lib/puppet/facts.d]{:path=>\"/var/lib/puppet/facts.d\", :ensure=>:directory, :loglevel=>:debug, :links=>:follow, :backup=>false}'", > "Debug: /File[/var/lib/puppet/state]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/var/lib/puppet/lib]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/hiera.yaml]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/var/lib/puppet/preview]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: /File[/etc/puppet/ssl/certs]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl]: Adding autorequire relationship with File[/etc/puppet]", > "Debug: /File[/etc/puppet/ssl/public_keys]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/certificate_requests]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private_keys]: Adding 
autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/etc/puppet/ssl/private]: Adding autorequire relationship with File[/etc/puppet/ssl]", > "Debug: /File[/var/lib/puppet/facts.d]: Adding autorequire relationship with File[/var/lib/puppet]", > "Debug: Finishing transaction 52782780", > "Debug: Received report to process from controller-0.localdomain", > "Debug: Processing report from controller-0.localdomain with processor Puppet::Reports::Store", > "stderr: + STEP=5", > "+ TAGS=file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle'", > "+ EXTRA_ARGS='--debug --verbose'", > "+ '[' -d /tmp/puppet-etc ']'", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ echo '{\"step\": 5}'", > "+ export FACTER_uuid=docker", > "+ FACTER_uuid=docker", > "+ set +e", > "+ puppet apply --debug --verbose --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle'", > "Failed to get D-Bus connection: Operation not permitted", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/volume.pp\", 44]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder/volume.pp\", 117]", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "Warning: ModuleLoader: module 'oslo' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. 
Use 'puppet module list --tree' to see information about modules", > "+ rc=2", > "+ set -e", > "+ set +ux", > "stdout: (cellv2) Running cell_v2 host discovery", > "(cellv2) Waiting 600 seconds for hosts to register", > "(cellv2) compute node compute-0.localdomain has registered", > "(cellv2) All nodes registered", > "(cellv2) Running host discovery...", > "Found 2 cell mappings.", > "Skipping cell0 since it does not contain hosts.", > "Getting computes from cell 'default': 608ab1e5-f50b-46b3-8fbc-8370783b8fa4", > "Creating host mapping for service compute-0.localdomain", > "Found 1 unmapped computes in cell: 608ab1e5-f50b-46b3-8fbc-8370783b8fa4", > "Debug: Facter: value for ec2_public_ipv4 is still nil", > "Debug: Facter: value for ipaddress_vxlan_sys_4789 is still nil", > "Debug: Facter: value for ipaddress6_vxlan_sys_4789 is still nil", > "Debug: Facter: value for netmask_vxlan_sys_4789 is still nil", > "Debug: Facter: value for network_vxlan_sys_4789 is still nil", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/pacemaker/cinder/backup_bundle.pp' in environment production", > "Debug: Automatically imported tripleo::profile::pacemaker::cinder::backup_bundle from tripleo/profile/pacemaker/cinder/backup_bundle into production", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::bootstrap_node in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::cinder_backup_docker_image in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::docker_volumes in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::docker_environment in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::pcs_tries in JSON backend", > "Debug: hiera(): Looking up tripleo::profile::pacemaker::cinder::backup_bundle::step in JSON backend", > "Debug: hiera(): Looking up 
cinder_backup_short_bootstrap_node_name in JSON backend", > "Debug: importing '/etc/puppet/modules/tripleo/manifests/profile/base/cinder/backup.pp' in environment production", > "Debug: Automatically imported tripleo::profile::base::cinder::backup from tripleo/profile/base/cinder/backup into production", > "Debug: hiera(): Looking up tripleo::profile::base::cinder::backup::step in JSON backend", > "Debug: importing '/etc/puppet/modules/cinder/manifests/backup.pp' in environment production", > "Debug: Automatically imported cinder::backup from cinder/backup into production", > "Debug: hiera(): Looking up cinder::backup::enabled in JSON backend", > "Debug: hiera(): Looking up cinder::backup::manage_service in JSON backend", > "Debug: hiera(): Looking up cinder::backup::package_ensure in JSON backend", > "Debug: hiera(): Looking up cinder::backup::backup_manager in JSON backend", > "Debug: hiera(): Looking up cinder::backup::backup_api_class in JSON backend", > "Debug: hiera(): Looking up cinder::backup::backup_name_template in JSON backend", > "Debug: hiera(): Looking up cinder_backup_short_node_names in JSON backend", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_property[property-controller-0-cinder-backup-role] with 'before'", > "Debug: Adding relationship from Exec[wait-for-settle] to Pcmk_bundle[openstack-cinder-backup] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/backup_manager] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/backup_api_class] with 'before'", > "Debug: Adding relationship from Anchor[cinder::config::begin] to Cinder_config[DEFAULT/backup_name_template] with 'before'", > "Debug: Adding relationship from Cinder_config[DEFAULT/backup_manager] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Cinder_config[DEFAULT/backup_api_class] to Anchor[cinder::config::end] with 'notify'", > 
"Debug: Adding relationship from Cinder_config[DEFAULT/backup_name_template] to Anchor[cinder::config::end] with 'notify'", > "Debug: Adding relationship from Anchor[cinder::service::begin] to Service[cinder-backup] with 'notify'", > "Debug: Adding relationship from Service[cinder-backup] to Anchor[cinder::service::end] with 'notify'", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.39 seconds", > "Info: Applying configuration version '1529674302'", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_property[property-controller-0-cinder-backup-role]", > "Debug: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/before: subscribes to Pcmk_bundle[openstack-cinder-backup]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/backup_manager]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/backup_api_class]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::config::begin]/before: subscribes to Cinder_config[DEFAULT/backup_name_template]", > "Debug: /Stage[main]/Cinder::Deps/Anchor[cinder::service::begin]/notify: subscribes to Service[cinder-backup]", > "Debug: /Stage[main]/Cinder::Backup/Service[cinder-backup]/notify: subscribes to Anchor[cinder::service::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_manager]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_api_class]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_name_template]/notify: subscribes to Anchor[cinder::config::end]", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Property[cinder-backup-role-controller-0]/before: subscribes to Pacemaker::Resource::Bundle[openstack-cinder-backup]", > "Debug: 
/Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_manager]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_api_class]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_name_template]: Adding autorequire relationship with Anchor[cinder::install::end]", > "Debug: Class[Tripleo::Profile::Pacemaker::Cinder::Backup_bundle]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Pacemaker::Cinder::Backup_bundle]: Resource is being skipped, unscheduling all events", > "Debug: Class[Tripleo::Profile::Base::Cinder::Backup]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Tripleo::Profile::Base::Cinder::Backup]: Resource is being skipped, unscheduling all events", > "Debug: Class[Cinder::Backup]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Class[Cinder::Backup]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_manager]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_manager]: Resource is being skipped, unscheduling all events", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_api_class]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_api_class]: Resource is being skipped, unscheduling all events", > 
"Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_name_template]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_name_template]: Resource is being skipped, unscheduling all events", > "Debug: Pacemaker::Property[cinder-backup-role-controller-0]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Property[cinder-backup-role-controller-0]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12ri0ij returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-12ri0ij property show | grep stonith-enabled | grep false > /dev/null 2>&1", > "Debug: /Stage[main]/Cinder::Backup/Service[cinder-backup]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: /Stage[main]/Cinder::Backup/Service[cinder-backup]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1o5sebm returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1o5sebm property show | grep cinder-backup-role | grep controller-0 | grep true > /dev/null 2>&1", > "Debug: property exists: property show | grep cinder-backup-role | grep controller-0 | grep true > /dev/null 2>&1 -> false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1oz9xmq returned ", > "Debug: try 1/20: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1oz9xmq property set --node controller-0 cinder-backup-role=true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1oz9xmq diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1oz9xmq.orig returned 0 -> CIB updated", > "Debug: property create: property set --node controller-0 cinder-backup-role=true -> ", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Property[cinder-backup-role-controller-0]/Pcmk_property[property-controller-0-cinder-backup-role]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Property[cinder-backup-role-controller-0]/Pcmk_property[property-controller-0-cinder-backup-role]: The container Pacemaker::Property[cinder-backup-role-controller-0] will propagate my refresh event", > "Info: Pacemaker::Property[cinder-backup-role-controller-0]: Unscheduling all events on Pacemaker::Property[cinder-backup-role-controller-0]", > "Debug: Pacemaker::Resource::Bundle[openstack-cinder-backup]: Not tagged with file, file_line, concat, augeas, pacemaker::resource::bundle, pacemaker::property, pacemaker::constraint::location", > "Debug: Pacemaker::Resource::Bundle[openstack-cinder-backup]: Resource is being skipped, unscheduling all events", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-t6g1q9 returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-t6g1q9 constraint list | grep location-openstack-cinder-backup > /dev/null 2>&1", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1hmmk1s returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1hmmk1s resource show openstack-cinder-backup > /dev/null 2>&1", > "Debug: Exists: bundle openstack-cinder-backup exists 1 location exists 1 deep_compare: false", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1qy7j62 returned ", > "Debug: /usr/sbin/pcs -f 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1qy7j62 resource bundle create openstack-cinder-backup container docker image=192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest replicas=1 options=\"--ipc=host --privileged=true --user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" run-command=\"/bin/bash /usr/local/bin/kolla_start\" network=host storage-map id=cinder-backup-etc-hosts source-dir=/etc/hosts target-dir=/etc/hosts options=ro storage-map id=cinder-backup-etc-localtime source-dir=/etc/localtime target-dir=/etc/localtime options=ro storage-map id=cinder-backup-etc-pki-ca-trust-extracted source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted options=ro storage-map id=cinder-backup-etc-pki-ca-trust-source-anchors source-dir=/etc/pki/ca-trust/source/anchors target-dir=/etc/pki/ca-trust/source/anchors options=ro storage-map id=cinder-backup-etc-pki-tls-certs-ca-bundle.crt source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt options=ro storage-map id=cinder-backup-etc-pki-tls-certs-ca-bundle.trust.crt source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt options=ro storage-map id=cinder-backup-etc-pki-tls-cert.pem source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem options=ro storage-map id=cinder-backup-dev-log source-dir=/dev/log target-dir=/dev/log options=rw storage-map id=cinder-backup-etc-ssh-ssh_known_hosts source-dir=/etc/ssh/ssh_known_hosts target-dir=/etc/ssh/ssh_known_hosts options=ro storage-map id=cinder-backup-etc-puppet source-dir=/etc/puppet target-dir=/etc/puppet options=ro storage-map id=cinder-backup-var-lib-kolla-config_files-cinder_backup.json source-dir=/var/lib/kolla/config_files/cinder_backup.json target-dir=/var/lib/kolla/config_files/config.json options=ro storage-map id=cinder-backup-var-lib-config-data-puppet-generated-cinder- source-dir=/var/lib/config-data/puppet-generated/cinder/ 
target-dir=/var/lib/kolla/config_files/src options=ro storage-map id=cinder-backup-etc-iscsi source-dir=/etc/iscsi target-dir=/var/lib/kolla/config_files/src-iscsid options=ro storage-map id=cinder-backup-etc-ceph source-dir=/etc/ceph target-dir=/var/lib/kolla/config_files/src-ceph options=ro storage-map id=cinder-backup-dev- source-dir=/dev/ target-dir=/dev/ options=rw storage-map id=cinder-backup-run- source-dir=/run/ target-dir=/run/ options=rw storage-map id=cinder-backup-sys source-dir=/sys target-dir=/sys options=rw storage-map id=cinder-backup-lib-modules source-dir=/lib/modules target-dir=/lib/modules options=ro storage-map id=cinder-backup-var-lib-cinder source-dir=/var/lib/cinder target-dir=/var/lib/cinder options=rw storage-map id=cinder-backup-var-log-containers-cinder source-dir=/var/log/containers/cinder target-dir=/var/log/cinder options=rw --disabled", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1qy7j62 diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1qy7j62.orig returned 0 -> CIB updated", > "Debug: build_pcs_location_rule_cmd: constraint location openstack-cinder-backup rule resource-discovery=exclusive score=0 cinder-backup-role eq true", > "Debug: location_rule_create: constraint location openstack-cinder-backup rule resource-discovery=exclusive score=0 cinder-backup-role eq true", > "Debug: backup_cib: /usr/sbin/pcs cluster cib /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1e9pwko returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1e9pwko constraint location openstack-cinder-backup rule resource-discovery=exclusive score=0 cinder-backup-role eq true", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1e9pwko diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1e9pwko.orig returned 0 -> CIB updated", > "Debug: backup_cib: /usr/sbin/pcs cluster cib 
/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1112ncm returned ", > "Debug: /usr/sbin/pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1112ncm resource enable openstack-cinder-backup", > "Debug: push_cib: /usr/sbin/pcs cluster cib-push /var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1112ncm diff-against=/var/lib/pacemaker/cib/puppet-cib-backup20180622-8-1112ncm.orig returned 0 -> CIB updated", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Resource::Bundle[openstack-cinder-backup]/Pcmk_bundle[openstack-cinder-backup]/ensure: created", > "Debug: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Backup_bundle/Pacemaker::Resource::Bundle[openstack-cinder-backup]/Pcmk_bundle[openstack-cinder-backup]: The container Pacemaker::Resource::Bundle[openstack-cinder-backup] will propagate my refresh event", > "Info: Pacemaker::Resource::Bundle[openstack-cinder-backup]: Unscheduling all events on Pacemaker::Resource::Bundle[openstack-cinder-backup]", > "Debug: Finishing transaction 27444540", > "Notice: Applied catalog in 32.82 seconds", > " Total: 6", > " Success: 6", > " Skipped: 158", > " Total: 166", > " Out of sync: 6", > " Changed: 6", > " Pcmk property: 10.75", > " Last run: 1529674338", > " Config retrieval: 2.68", > " Pcmk bundle: 21.57", > " Total: 35.01", > " Config: 1529674302", > "Debug: Finishing transaction 40037060", > "+ CONFIG='include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle'", > "+ puppet apply --debug --verbose --verbose --detailed-exitcodes --summarize --color=false --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location -e 'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle'", > " with Stdlib::Compat::Bool. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/backup.pp\", 63]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder/backup.pp\", 33]", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18" > ] >} >2018-06-22 09:32:21,758 p=21516 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks5.json exists] ******** >2018-06-22 09:32:22,224 p=21516 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:32:22,245 p=21516 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:32:22,252 p=21516 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-06-22 09:32:22,280 p=21516 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 5] ******************** >2018-06-22 09:32:22,340 p=21516 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:32:22,341 p=21516 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:32:22,360 p=21516 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-06-22 09:32:22,391 p=21516 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 5] *** >2018-06-22 09:32:22,440 p=21516 u=mistral | skipping: [controller-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:32:22,471 p=21516 u=mistral | skipping: [compute-0] => {"skip_reason": "Conditional result was False"} >2018-06-22 09:32:22,486 p=21516 u=mistral | skipping: [ceph-0] => {"skip_reason": "Conditional result was False"} 
>2018-06-22 09:32:22,496 p=21516 u=mistral | PLAY [Server Post Deployments] ************************************************* >2018-06-22 09:32:22,522 p=21516 u=mistral | TASK [include] ***************************************************************** >2018-06-22 09:32:22,619 p=21516 u=mistral | TASK [include] ***************************************************************** >2018-06-22 09:32:22,718 p=21516 u=mistral | TASK [include] ***************************************************************** >2018-06-22 09:32:22,824 p=21516 u=mistral | TASK [include] ***************************************************************** >2018-06-22 09:32:22,921 p=21516 u=mistral | TASK [include] ***************************************************************** >2018-06-22 09:32:23,005 p=21516 u=mistral | PLAY [External deployment Post Deploy tasks] *********************************** >2018-06-22 09:32:23,008 p=21516 u=mistral | PLAY RECAP ********************************************************************* >2018-06-22 09:32:23,008 p=21516 u=mistral | ceph-0 : ok=111 changed=49 unreachable=0 failed=0 >2018-06-22 09:32:23,008 p=21516 u=mistral | compute-0 : ok=129 changed=51 unreachable=0 failed=0 >2018-06-22 09:32:23,008 p=21516 u=mistral | controller-0 : ok=172 changed=52 unreachable=0 failed=0 >2018-06-22 09:32:23,008 p=21516 u=mistral | undercloud : ok=21 changed=10 unreachable=0 failed=0